|
Many, many times. Both static and dynamic profiling are important tools in enterprise development. Yes, your basic steps are useful in a single application, but when you move into a large distributed, multi-user system, simple code viewing doesn't cut the mustard any more. The time it takes to look through hundreds of methods and work out which ones will be a problem under which conditions is wasted if you have a decent set of diagnostic tools at hand.
Real example: I had a junior put together a piece of work, a simple service for validating the entry from a form. Applying your logic above, the code would check out fine. Performance was good on the test system and there were no obvious conflicts; data was being cached, lookups avoided, etc. But the memory footprint was a pig, absolutely horrendous, and with each additional user the problem grew. We needed to pool resources, reduce the cached data and, in some places, slow down response times.
|
I also find this curious. "You didn't use a profiler, did you?" is a pretty common piece of bullshit that people sling. It's almost always perfectly obvious what the problem is, or that something will be a problem, without even running the code, let alone profiling it.
Basically what that comes down to is, if you do something stupid, it's going to suck. And preemptively not doing stupid sh*t is not "premature optimization".
Of course it's not always obvious. For example,
xor   ecx, ecx        ; ecx = 0 (loop counter)
_benchloop:
lzcnt eax, edx        ; count leading zeros of edx into eax
add   ecx, 1          ; increment counter; sets ZF only when ecx wraps back to 0
jnz   _benchloop      ; so the loop runs 2^32 times
Why does this measure the latency of lzcnt, instead of its throughput? It doesn't look like it should do that, so in the original code that contained the "problem lzcnt", that problem gave me quite the chase. Initially I didn't even notice something was wrong.
No actual profiler was involved though.
|
They always just say "male, aged 18-35, who used to kick cats", so it's a waste of time even consulting them.
I wanna be a eunuchs developer! Pass me a bread knife!
|
Good luck finding the most important bottlenecks in applications of several million lines of code without a profiler! Hell, even in smaller applications I wouldn't bother hunting for bottlenecks without one. Why would you deny yourself a helpful tool?
Wout
|
wout de zeeuw wrote: Why would you deny yourself a helpful tool?
Because he thinks the tool sucks
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
They do[^].
(sorry; it was just too easy)
Software Zen: delete this;
|
I find the idea of refusing to use every tool available to you incredibly naive. Granted, you should be able to solve simple performance problems just by looking at them, but some problems are just too big and interwoven to solve like that. Also, it's not always just your own code that you're profiling - don't forget that you can often identify problem areas in code you depend on, for which there might be alternatives. So yes, I use profilers regularly. I use them because I'm aware enough to know I don't know everything and that there's always something new that I can learn.
|
>>> Personally, I think anybody who has been programming a while should be able to look at a block of code for a few minutes and instantly identify why it's slow.
Are you serious!? Maybe for a single method or a simple application, but for a complex application there's no way you can "just see it". Here's a simple real-world example: recently I used a profiler to see where the bottleneck was in a web service which sits on top of 3 or 4 other layers (and I won't elaborate because I'm keeping this simple). It turned out the bottleneck was in a piece of Microsoft code used within the ORM we use, so that was refactored to use a different method to do the same thing. There's no way you would have just seen that by looking, not least because the code wasn't there to see.
I'd happily admit I also wouldn't be able to just see an issue in my own code some of the time. In the above example we're talking thousands of lines spread over multiple assemblies; why would I bother trawling through all that when I can just use a profiler?
I was lucky enough to be able to test this on a single thread; good luck if you come across an issue that only occurs when multiple threads are involved....
The profiler is your friend.
PS. does this mean you never use SHOWPLAN/EXPLAIN either?
|
Absolutely I use a profiler. You're right that looking at code can often provide the answer, but when your code base is several thousand lines of code split across multiple components, using a profiler can at the very least help identify where the problem is.
It's also useful for spotting things that aren't a problem in your own code. Maybe a library you're using is doing something stupid, or perhaps it's a little-known feature of the .NET framework that's causing you problems. We recently had an issue with an older piece of software that was running into exactly that; looking at the code would not have helped in the slightest, but a profiler quickly sorted out where the problem was.
A profiler should be part of any developer's toolbox. Fair enough, it shouldn't be the only solution to every problem, but it should be a tool that you are comfortable using and that you know when to use.
Eagles may soar, but weasels don't get sucked into jet engines
|
Inline SQL works fine so long as your queries are parameterised.
|
Any reason why parametrized SQL code would be slower than a stored procedure? I mean, unless your statement is so big that the network overhead of sending it would be significant, a parametrized statement would have its execution plan cached by the server, same as the stored procedure.
Of course it's a lot more interesting to get all the results you need in one statement instead of looping over every row and issuing a SQL statement for every row. Same goes for LDAP.
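For what it's worth, here is a minimal C sketch of a parameterized query, using SQLite's C API purely as a stand-in (the thread is about SQL Server, but the shape is the same); the table and column names are invented for the example:

#include <stdio.h>
#include <sqlite3.h>

/* The value is bound to the '?' placeholder instead of being
 * concatenated into the SQL text. On a server such as SQL Server,
 * the plan for this statement text can be cached and reused,
 * much like a stored procedure's.                                 */
static int print_order_total(sqlite3 *db, int customer_id)
{
    sqlite3_stmt *stmt = NULL;
    const char *sql = "SELECT SUM(total) FROM orders WHERE customer_id = ?;";
    int rc;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_int(stmt, 1, customer_id);          /* parameters are 1-based */
    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
        printf("customer %d: %.2f\n", customer_id, sqlite3_column_double(stmt, 0));

    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? 0 : -1;
}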
|
Additionally, using the stored procedure doesn't eliminate the parameterized SQL code -- so now you have two things that need to be profiled, by two different profilers.
|
I rarely use profilers, but there are a few cases where even good programmers may have trouble seeing why code is slow. Sometimes a library call you expect to be fast turns out to be slow. This could be due to system calls that take longer than the programmer realizes, or because the library function's algorithm works very differently than the programmer assumed. We all make assumptions that sometimes aren't true.
One very big issue is using something like strlen(some_string) as if it were a constant in a C program. Many programmers treat it as a constant in the compiler's mind, when it may not be (GCC has an __attribute__ that lets the compiler see that it is, but it's not standard). That's the kind of thing that can end up in a loop, possibly as part of the loop condition, without many programmers realizing it's a problem.
While you might spot that in the code, there are similar circumstances which are less obvious. These are often the result of invisible code introduced by compilers for languages higher-level than C. Overloaded operators, copy constructors, and destructors can all have much more impact than programmers realize. Knowing what kinds of things your language may hide is a good way to avoid these, but some things still slip through.
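To make the strlen point above concrete, here's a minimal hypothetical C sketch (the function names are made up):

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Counts uppercase characters. strlen() is re-evaluated on every
 * iteration, so this runs in O(n^2) for an n-character string.    */
static size_t count_upper_slow(const char *s)
{
    size_t count = 0;
    for (size_t i = 0; i < strlen(s); i++)   /* strlen called every pass */
        if (isupper((unsigned char)s[i]))
            count++;
    return count;
}

/* Hoisting the call makes it O(n). The compiler can only do this for
 * you if it can prove the buffer is never modified inside the loop
 * (which is what the GCC attribute mentioned above helps with).      */
static size_t count_upper_fast(const char *s)
{
    size_t len = strlen(s);                  /* evaluated once */
    size_t count = 0;
    for (size_t i = 0; i < len; i++)
        if (isupper((unsigned char)s[i]))
            count++;
    return count;
}

int main(void)
{
    const char *s = "Hello, World";
    printf("%zu %zu\n", count_upper_slow(s), count_upper_fast(s));
    return 0;
}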
Other things that are easy to miss are how contended your locks, semaphores, mutexes, and other blocking operations are. These can be difficult to figure out by inspection.
There are also times when you need to demonstrate to someone else where the bottlenecks in the code are. If your cocky junior programmer weren't your subordinate but rather your boss, then the output of a profiler would be something you could use to combat the cockiness. This is particularly useful when you get "Your code is slow! Fix it!" and you can come back with "This external code/hardware/whatever, which isn't under my control, is what is slow."
Another time a profiler is very useful is when there's disagreement or uncertainty over how much of the slowness each of several parts of a program is responsible for. Maybe you need to prioritize the order in which they're addressed. For you or me it may be obvious that a section of code is slower than it needs to be, but when comparing two suboptimal sections of code, eyeballs alone are very often not enough.
There are also times when a profiler can be used to find bugs that would take a lot of stepping in a debugger to locate.
|
I've done both, and yes, using your noggin to choose a better algorithm wins over using a profiler every time. The profiler can only tell you how to make your bubble sort run faster; it can't tell you to implement a quicksort instead.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
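A small C sketch of the bubble-sort point above, assuming plain ints for simplicity: a profiler can tell you the bubble sort is where the time goes, but only a human decides to throw it away and call an O(n log n) sort instead.

#include <stdio.h>
#include <stdlib.h>

/* O(n^2) bubble sort: the profiler can show this is the hot spot,
 * but it can't suggest replacing the algorithm.                   */
static void bubble_sort(int *a, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1]) {
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}

/* Choosing the better algorithm: the standard library's qsort is
 * O(n log n) on average; no profiler run is needed to know that.  */
static int cmp_int(const void *p, const void *q)
{
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

int main(void)
{
    int data[] = { 5, 3, 9, 1, 7 };
    size_t n = sizeof data / sizeof data[0];

    qsort(data, n, sizeof data[0], cmp_int);   /* instead of bubble_sort(data, n) */
    for (size_t i = 0; i < n; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}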
|
The profiler can't tell you how to make your bubble sort faster, only that making it faster would have a major impact on run time (for large data sets). It won't tell you what techniques to use to get that additional performance, but it can help you understand how and why adding more code sped up the sort (because each piece of code ends up being executed less often).
|
I agree that when analyzing a simple block of code, a profiler is probably overkill. I've used a profiler in the past for projects written by multiple people that have a lot of interactions with other processes and data coming in at different rates, etc.
Profiler stats speak louder and less personally than you going and analyzing other people's code and giving them the same feedback.
|
If I were asked to review a co-worker's code and he wasn't interested in my feedback unless I produced profiler outputs, charts, stats, graphs and a 5-page PDF presentation, I'd consider that co-worker a douche and a waste of time and go work on something else. If my boss came and asked if I had a chance to review Bob's code, I'd simply say, "Yup. I located the issue, but Bob wasn't interested in my feedback, so I went back to working on X."
That's not a theoretical "what would you do?". I've done that several times over my career, and it generally ended very badly. For Bob.
I was a newb at a job, like on my 2nd or 3rd day, and I overheard Bob talking with his boss at the next cubicle about an issue that Bob had been working on for 2 weeks with no success. I was familiar with the issue since I was very strong in that area, so I politely went up to them and said, "Excuse me, I couldn't help overhearing about the issue. Not to interrupt or anything, but I'm certain your issue could be resolved by doing X, as I have run into that issue before and am very familiar with it."
Bob pretty much tore me a new one in front of the boss and said that I had no clue what I was talking about as this was my 2nd day on the job, and that I should go back to surfing the net, or whatever it was he said along those lines.
Turned out I was 100% spot on, and Bob was fired for his unprofessionalism.
|
I was surfing code to get a feel for what was being done. I was pleased with the naming conventions, because I could tell what was going on just by reading the variable names. Then I found a bug, because the code told me exactly what it was doing. I went to my manager, expecting to get my hand slapped for daring to read what was none of my business. Instead I was told it was built per the specs. So I asked to see the specs. To my surprise, I was given the specs.
These were the most exacting specs I'd ever seen. I agreed that the code exactly matched them. The manager looked like she thought the subject was closed. Then I said it was just too bad the specs were asking the code to do something incorrect. We ended up in the lab, where I said this code should blow up with threading problems at times. The lab tech piped up and said they were indeed having problems with code blowing up with threading issues. THEN the manager was willing to look at the math that had immediately told me there was a problem.
I finally was able to prove that if the maximum thread count was 100 and the step count was 20, then at a current count of 100 it would ask for 20 more threads, and at 120 it would ask for 20 more again. It would only stop asking for more threads when the count reached 121. I can't remember which, but either changing a + to a - (or vice versa) or switching which side the step count was added to or subtracted from would have fixed it.
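A hypothetical C reconstruction of that kind of check (the names, and the exact shape of the bug, are my guess from the description above): subtracting the step instead of adding it lets the pool keep growing past the maximum.

#include <stdio.h>

#define MAX_THREADS 100
#define STEP         20

/* Buggy version, as described: the step is subtracted from the current
 * count, so at 100 and even at 120 the pool still asks for more threads;
 * it only stops once the count reaches 121 or above.                     */
static int should_grow_buggy(int current)
{
    return current - STEP <= MAX_THREADS;
}

/* Intended check: only grow if the new total would still fit. */
static int should_grow_fixed(int current)
{
    return current + STEP <= MAX_THREADS;
}

int main(void)
{
    for (int current = 80; current <= 140; current += STEP)
        printf("current=%3d  buggy asks for more: %d  fixed asks for more: %d\n",
               current, should_grow_buggy(current), should_grow_fixed(current));
    return 0;
}

Flipping the - to a + (or, equivalently, moving the step to the other side of the comparison) is exactly the one-character class of fix described above.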
|
Did you also point out to your boss that it doesn't make any sense to have more threads than CPU cores (or CPU cores x 2 if you turn on hyperthreading)? Give or take... If you are banging 100 threads on a 4-core + hyperthreading machine (8 hardware threads), you'll lose any performance gains to all the context switching.
|
For CPU-bound threads this is the case, but if threads are waiting on IO or get to sleep, it is not. More threads than cores can sometimes even be a winner for CPU-intensive programs if physical memory is the bottleneck under some circumstances (swapping memory to disk is IO that might force a thread to take a nap). It's nice when you can design for a nearly 1-to-1 thread/core relationship, but it's often quicker (in terms of developer time) to write most threads in a style similar to stand-alone programs rather than to separate everything into queueable tasklets that don't sleep (they never sleep! but they do run to completion and often schedule a continuation tasklet to handle the results of an IO operation or to start up after a given passage of time).
|
SledgeHammer01 wrote: 6) don't inline SQL code, use stored procs
If you can inline SQL code, your DBA is an idiot for permitting you to do so. Of course that's the case in 85% (WAG) of the SQL environments out there.
The only profiler I use is SQL's, and yes, I use it to find slow-performing code, but that's because I didn't write most of the code, so I don't know it.
I haven't the faintest idea how to use a profiler to improve performance. To me, all it's good for is finding code that could stand improvement.
|
A profiler is a tool like anything else. It can sometimes expose inefficiencies that you never would have thought of, or other times be useless (e.g. you can't fix what is using most of the CPU time/memory without a significant redesign). A person's assumptions about what is and isn't efficient can also be flawed, even for an experienced developer, when they're based on past observations that could change dramatically with a single new release of your runtime, external libraries, or compiler.
The same could be said about using a debugger vs. just adding debug logging. With a debugger you may be wasting a lot of time tediously stepping through the code execution, instead of letting it run and simply browsing the debug log to narrow down the problem. But sometimes simple debug output isn't enough, and a debugger is required.
|
As of last night, for some reason, I could not view some PNG files on a website that I manage. Weird as hell! I changed them to JPGs and they were fine. Has anyone had something similar recently?
|
Maybe you're blocking the PNG files in the web.config, or images with that extension are denied access in your settings. Errors like that are possible.
Favourite line: Throw me to them wolves and close the gate up. I am afraid of what will happen to them wolves - Eminem
~! Firewall !~
|
Possibly. I will have to check the GoDaddy settings. I know I haven't touched it, and there's no one else working on this.
|