|Benchmarking is not a way to determine algorithmic complexity.
It can be used to confirm that your theoretically derived O() is correct, but a handful of timing values does not constitute an O().
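To illustrate what such a confirmation might look like (a minimal Python sketch, not from the original post; `count_pairs` is a made-up, deliberately quadratic function), you can time an algorithm at doubling input sizes and check that the growth matches the predicted curve. For O(n^2), each doubling of n should roughly quadruple the runtime:

```python
import timeit

def count_pairs(items):
    # Deliberately quadratic: compare every unordered pair.
    n = len(items)
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                pairs += 1
    return pairs

# Time at doubling sizes; for O(n^2), each doubling should
# roughly quadruple the measured time (only roughly -- see the
# caveats about setup overhead and optimizations below).
for n in (500, 1000, 2000):
    data = list(range(n))
    t = timeit.timeit(lambda: count_pairs(data), number=3)
    print(f"n={n:5d}  t={t:.4f}s")
```

The ratios between consecutive timings are evidence for or against the hypothesis; they are not themselves an O().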
To the OP: In my studies, algorithmic complexity was the sole subject of a quarter-long course, with a textbook a few hundred pages long. I don't remember whether I took it in our junior or senior year; in any case, we were fairly experienced programmers by then and had completed courses in related subjects such as statistics and queueing theory. The foundation was in place.
If you want to learn how to determine algorithmic complexity in general, you may have a long path ahead of you. There may be quick and easy answers for specific, simple algorithms (an experienced programmer can produce an O() off the top of their head for the examples you present), but you would benefit from learning more general methods of complexity estimation.
And: If you are doing timing benchmarks, make sure to test over a broad range of input, here in terms of problem size. With small problem sets, initial setup, reporting, etc. can account for a large share of the time consumed. Also be very much aware of compiler optimizations: some compilers are clever about short-circuiting loops and similar constructs in ways that can severely skew timings, so it may be safer to turn all optimization off. (Actually, for verifying a theoretical O(), you may come out better by adding counters to your code in appropriate places rather than measuring execution times!)
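The counter idea can be sketched like this (a hypothetical Python example, not from the original post): instrument the algorithm to count its basic operation instead of timing it. For bubble sort, the comparison count is exactly n(n-1)/2, so the ratio count/n^2 should settle near 0.5, independent of machine, compiler, or setup overhead:

```python
import random

def bubble_sort_count(items):
    """Bubble sort instrumented with a comparison counter.

    Returns (sorted_list, comparisons). The counter, not the
    wall clock, is what we check against the O(n^2) model.
    """
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# The normalized count c/n^2 should approach 0.5 as n grows,
# confirming the quadratic model without any timing at all.
for n in (100, 200, 400):
    _, c = bubble_sort_count(random.sample(range(n * 10), n))
    print(f"n={n:4d}  comparisons={c:7d}  c/n^2={c / n**2:.3f}")
```

Unlike timings, these counts are deterministic and immune to optimizer interference, which makes them much better evidence when verifying a theoretical O().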