Yes, deleting allocated objects is one of the places where a lot of bugs turn up, and many of them are very difficult to track down. Slow memory leakage is something that eats up a lot of support time and turns off many customers.
Yes, I know. That's why I make sure I never have those bugs. My software has to run non-stop for months and months, so leaks and errors of any kind always show up.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Rick York wrote: deleting what you allocate is far too much to ask of programmers
Actually it is. "Use after free" is one of the biggest security risks in C/C++. Also, not freeing what's no longer in use eventually leads to memory exhaustion.
These two problems have been known issues since at least 1958, when Lisp was first developed. This is also why all high-level business languages (as opposed to languages for embedded or operating-system development) include at least memory garbage collection. Almost all early languages (COBOL, BASIC, FORTRAN, APL, Algol, etc.) have some concept of garbage collection for some data types. What changed with Java was that all data types are garbage collected unless the programmer explicitly tells the compiler not to do so.
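To make the two failure modes concrete, here is a minimal C++ sketch (the Session type and function names are invented for illustration, not taken from anyone's code): the first two functions show a slow leak and a use-after-free with raw new/delete, and the third uses std::unique_ptr so the release happens automatically on every path.

#include <iostream>
#include <memory>
#include <string>

struct Session {
    std::string user;
    explicit Session(std::string u) : user(std::move(u)) {}
};

// Bug 1: slow leak - one code path forgets to release the allocation.
void leaky(bool log_it) {
    Session* s = new Session("alice");
    if (!log_it) return;            // this path skips the delete below: a slow leak
    std::cout << s->user << '\n';
    delete s;                       // only the other path cleans up
}

// Bug 2: use-after-free - the pointer is dereferenced after delete.
void use_after_free() {
    Session* s = new Session("bob");
    delete s;
    std::cout << s->user << '\n';   // undefined behaviour
}

// RAII: ownership is explicit and release happens on every exit path.
void safe() {
    auto s = std::make_unique<Session>("carol");
    if (s->user.empty()) return;    // destructor still runs here
    std::cout << s->user << '\n';
}                                   // ...and here

int main() {
    leaky(false);          // leaks one Session
    safe();                // prints "carol" and cleans up
    // use_after_free();   // deliberately not called: undefined behaviour
    return 0;
}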
Nonsense. If that is too much to ask of a programmer, then they need to find another line of work.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
A big part of the problem is that it's not always clear who bears the responsibility for releasing the allocation. If you think otherwise, perhaps it's you who needs to consider an alternative career. Or prepare yourself for a big shock if you are just getting started and have simply assumed it is that simple.
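To see why that responsibility question matters, consider a minimal sketch (create_widget and make_widget are hypothetical names, not from any real library): a raw-pointer factory says nothing about who must free the result, whereas returning std::unique_ptr encodes the answer in the type.

#include <memory>

struct Widget { int id = 0; };

// Ambiguous: must the caller delete this, or does the library keep a
// registry and free it later? The signature alone cannot tell you.
Widget* create_widget() { return new Widget{1}; }

// Unambiguous: the caller receives ownership, and the object is released
// when the unique_ptr goes out of scope (or is explicitly moved on).
std::unique_ptr<Widget> make_widget() { return std::make_unique<Widget>(); }

int main() {
    Widget* raw = create_widget();
    delete raw;               // correct only if the caller was meant to own it

    auto w = make_widget();   // ownership is clear from the return type
    // ... use *w ...
    return 0;
}                             // w is released here, on every exit path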
If it isn't clear then you aren't doing it right.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Exactly the kind of rebuttal I would expect from someone who doesn’t have a lot of experience.
Your pronouncement would be more defensible if you had written that "somebody" didn't do it right, but it's not necessarily the someone who is writing code today, and the question of what exactly "it" is has a couple of potential answers. It could be, for example, that a library author meant to conform to a specific predefined protocol and failed. Or it could be that they were implementing something new and the documentation they provided is incomplete or incorrect. In especially old code, perhaps they *were* correctly following a known protocol but the protocol itself ended up redefined. Or one of my favorites: a library has multiple functions that accept an allocation as a parameter. Some consume the allocation and others just reference it, and there's a convention to help you, as the library user, recognize which are which (a sketch of that convention follows this post). But there's also an old function that doesn't follow the convention; its behavior is grandfathered in because it's used in existing systems, and the footnote mentioning this legacy deviation was cropped off the bottom of the photocopied documentation you were given. I've run into all of those scenarios in large-scale production systems that I was trying to interface with.
It's easy to make the simplistic assertion that the only reason this is an issue is that somewhere, sometime, somebody did something wrong. You may be 100% correct about that. But you're making the very point you're arguing against. Things like this absolutely happen, and in real life this is one of the most common sources of program misbehavior. We know from decades of experience that this *will* go wrong and that it *will* result in system instability and/or security exposures. So we can cross our fingers and hope, as systems continue to increase in complexity, that coders as a population will finally become perfect at it, or we can automate this tedious, error-prone task for essentially perfect behavior today and let developers spend their time and energy on the real meat of their projects.
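Here is a rough sketch of the consume-versus-reference convention described above (submit, inspect, and legacy_submit are invented names, not from any real library): a sink parameter expressed as std::unique_ptr taken by value makes the ownership transfer explicit, a const reference only borrows, and a grandfathered raw-pointer function that secretly deletes its argument is exactly the trap the cropped footnote would have warned about.

#include <memory>

struct Buffer { int size = 0; };

// Convention: takes ownership ("consumes") - the unique_ptr-by-value
// signature makes the transfer explicit and compiler-checked.
void submit(std::unique_ptr<Buffer> buf) { buf->size = 0; /* queued, freed automatically when done */ }

// Convention: only references the buffer for the duration of the call;
// the caller still owns it afterwards.
void inspect(const Buffer& buf) { (void)buf; }

// The grandfathered exception: a raw pointer that *secretly* takes
// ownership and deletes it internally. Nothing in the signature warns you;
// only the (possibly cropped) documentation does.
void legacy_submit(Buffer* buf) { delete buf; }

int main() {
    auto b = std::make_unique<Buffer>();
    inspect(*b);              // still ours afterwards
    submit(std::move(b));     // ownership visibly handed over; b is now empty
    // Passing a pointer we still use to legacy_submit() would set up a
    // silent double delete - the scenario described in the post above.
    return 0;
}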
Well ... with the new C# version, at least pointers will come back, just for you
And, after enough versions and upgrades, you may even finally be back to C++
One can only hope.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol, Mar 2010
Good points.
- True, but you can do unmanaged stuff in C#, and you can interface fairly easily with any C++ components.
- I disagree here. The investment can be made over time, and #1 could help here. Is there a cost? Absolutely, but far less than migrating to completely different tools.
- Can you give examples? Most libraries are as easy or as difficult as in any other language; it depends more on who made the libraries. The C# framework itself is well documented and well supported.
What's holding C# back is, I think (no expert), the lack of support on devices like TVs, phones, and tablets, or on other platforms (macOS/Linux), and therefore cross-platform integration.
V. wrote: but you can do unmanaged stuff in C#, and you can interface fairly easily with any C++ components
Yes, you could write the UI (for example) of the system tools in C#, but why bother? It just adds another requirement (and another failure point) to the system.
V. wrote: The investment can be made over time, and #1 could help here. Is there a cost? Absolutely, but far less than migrating to completely different tools.
I did not say that it could not be done. I did say that, because of the cost, it is unlikely to be done, citing the prevalence of Cobol as an example of a similar case.
V. wrote: Can you give examples?
It's not that learning the libraries is more difficult than learning the libraries available for other languages. The issue is the conversion cost - you have to take a productive programmer, an expert in C or C++, and turn him or her into a novice C# programmer. It is true that they will eventually learn the C# way of doing things, but in the meantime they will be less productive. Many companies are unwilling to pay this cost, or can't afford to.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
It's also valid in the opposite direction: with some time invested you can do the job in C++, which is a more performant language and has its own huge collection of (mostly) cross-platform libraries.
I would also add to the list of issues the lack of compatibility between different .NET versions.
Maybe. The idea that C++ is highly efficient comes from the fact that C is. However, using many of the features of C++ can make a C++ application take more memory and run slower than a managed language. With the advent of .NET 5.0, the performance and cross-platform issues mostly become moot. The only real thing lacking is WinForms for non-Windows environments. The WPF approach has some advantages for gaming and graphics applications. It falls short, however, when it comes to the line-of-business class of applications that funds most development.
My question would be why people still think Java (or its cousins like Kotlin) makes sense. My take is that there is still a culture that is anti-Microsoft.
tcruse wrote: My take is that there is still a culture that is anti-Microsoft
Now how on earth could such a culture develop? A mystery in a conundrum.
(Or something related to how they treat their customers - like releasing defective software since as long ago as DOS 6.0.)
[edit] I had misspelled DOS (believe it or not)[/edit]
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol, Mar 2010
The idea that C++ is highly efficient comes from its zero-cost abstraction design, not from anything else. Besides, performance is not everything: languages like C and C++ can offer deterministic execution times as well as deterministic object lifetimes (see the sketch below), something that is impossible with managed languages by design and which makes them a poor choice for system programming. The fact that .NET Core doesn't support WinForms outside Windows makes C# a poor choice for cross-platform GUI application development as well, so it ends up being a niche platform for people who find themselves needing to program on Linux but don't want to learn anything besides C#.
"However, using many of the features of C++ can make a C++ application take more memory and run slower than a managed language." Really? Do you have an example?
Daniel Pfeffer wrote: 1. For all its advantages, C#, like Java, is unsuited to system-level programming. The kernel in both Windows and Linux is programmed in C and ASM.
C# isn't really designed for system-level programming. It's designed for building applications. In that regard, after 12 years of using it, I find it remarkably fluent and concise. That said, I have used it for several Windows services with no trouble.
Daniel Pfeffer wrote: 2. Many organizations have an investment in C and C++ code. Conversion to C# would require a major investment. Note that this is also one of the reasons that companies keep using Cobol, so I don't see this changing in the near future.
That's true for any language, not just C#.
Daniel Pfeffer wrote: 3. C# does have a serious learning curve - not for the language, but for its libraries. If you have learnt to do things in C or C++, converting to C# is far from simple.
True. When I started in C#/WPF back in 2008, it took me quite a while to grasp one of the fundamentals of .NET programming: it's in there. C++ and MFC require that you build some application basics yourself. Many of those basics are already present in .NET and whatever UI framework you choose. Instead of saying to yourself "OK, how do I wrap the primitive crap in something elegant in order to make this work", as you so often do in C++, the Windows API, and MFC, it's "there's got to be something to do this in .NET; the question is where?"
Software Zen: delete this;
I agree. As an embedded developer (now retired), I found that neither C# nor Java would have been a viable option for the products I worked on. In the last 20 years of my career I think the largest amount of memory I had was 256K of RAM and 1M of flash (on a TI DSP). Most of my projects had far less than that, so C# or Java were nonstarters.
I can add another half cent to Daniel Pfeffer's response above. For any server-side programming I would avoid languages that require a runtime engine, like Java and C#. Deployment and upgrading on production servers is (or seems to me) tough enough without having to bother with a runtime environment as well. Of course C# has LINQ and loads of nifty libraries for this and that, but for streaming video like we do, it will not cut it.
"If we don't change direction, we'll end up where we're going"
I agree, but then again, what language does not rely on libraries, be it some framework, a runtime library, or the OS? The last computer I had that started with totally blank memory and waited for you to key in the first machine-code instructions before you could let it run was my little Elf from 1978. And what did I learn on that computer? Write libraries if you don't want to type everything every time.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
I somewhat agree with the other responses, but not entirely. Plenty of people are OK with installing NodeJS/npm on servers, so I don't get the runtime argument there. Also, C# has excellent interop, in my opinion, with both C and C++, so I don't see why you'd have to convert your entire code base (a sketch of that boundary follows this post).
Sure, C# isn't great in ultra-high-performance scenarios where ASM, C, or C++ would be a better fit. No managed language is a good fit for those scenarios. But beyond that I don't get it either.
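As a rough sketch of what that interop boundary can look like (the file name mathlib and the function compute_mean are invented for illustration): the existing C++ code is wrapped in a flat extern "C" entry point compiled into a DLL or shared object, and the C# side binds to it with P/Invoke instead of being rewritten.

// mathlib.cpp - existing C++ code exposed through a C ABI so that C#
// (or any other language) can call it without a full rewrite.
#include <cstdint>
#include <numeric>
#include <vector>

namespace legacy {
    // Pretend this is the battle-tested C++ you don't want to port.
    double mean(const std::vector<double>& v) {
        return v.empty() ? 0.0
                         : std::accumulate(v.begin(), v.end(), 0.0) / v.size();
    }
}

// Flat, C-compatible entry point. On Windows this would typically be built
// into a DLL (export declarations omitted here); on Linux, a shared object.
extern "C" double compute_mean(const double* data, int32_t count) {
    if (data == nullptr || count <= 0) return 0.0;
    return legacy::mean(std::vector<double>(data, data + count));
}

// The C# caller (shown only as a comment, to keep this sketch in one
// language) would declare something like:
//   [DllImport("mathlib")] static extern double compute_mean(double[] data, int count);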
Again, I'd agree. I don't expect C# to take over the world, but it does seem to me that it would be a good, even sometimes better, fit for some new projects, and yet it seems almost to be the last language of choice.
I find the dependency issue to be a problem with almost every language/system I've worked with, and indeed C# usually seems to fare pretty well on this score. And companies I've worked with seem eager - sometimes too eager - to adopt the latest shiny language on the block. Maybe it's a hangover from the days when 'real' programmers simply didn't want anything that Microsoft had a hand in? (I used to be a Microsoft hater, but much of their stuff these days seems excellent.)
Thank you to anyone taking the time to read my posts.
I don't get it personally. If I had to guess, I'd say people either want to use what's new and shiny, like you said, or want to justify using what they've always used. C# hasn't always been as platform-agnostic as it is today, so I imagine a lot of people picked other languages at that time, like Java, which today is objectively worse than C# in every way, in my opinion (e.g. properties; for the love of everything related to software, why has Java been so obstinate on this subject?).
Because Lua is better. 
Yes, I am generally impressed by Lua. Such a simple idea implemented well.
Thank you to anyone taking the time to read my posts.