I'm trying to get a better understanding of how debugging works with programs built in an IDE. I have a guess at how it works, but I'm hoping someone will confirm it. My guess is that the program you are building is hooked through its update function (the function through which it gets updated by Windows). When you run your program in debug mode, the changes taking place inside your program are exchanged, through that update function's parameters, with Windows, which in turn sends the data to the IDE.
A debug build of a program includes information that an external program (the debugger) can read in order to identify which source statement is being executed at any point in time. The debugger captures execution at the beginning of the program, or at any specified breakpoints, and can then execute or skip instructions as directed by the user. The debugger can be run from within the IDE, or stand-alone in a terminal window, depending on the system and framework in use. When run in the IDE, it is just using the IDE's windows to display information.
Not at all, since you, the user, are in control of the system: which applications are running and what they are allowed to do. And debug information is not a simple text file that anyone can immediately interpret.
The really low-level implementation may vary a lot.
A couple of variants I have worked with:
To halt execution temporarily, whether at an explicitly declared breakpoint or implicitly, e.g. on a 'continue to next line', the debugger looks up, in the debug information, the address of the first instruction generated for that source line. It copies and saves that instruction, and inserts a special breakpoint instruction in its place. Most modern CPUs provide such an instruction; it generates an internal interrupt, causing execution of an interrupt handler provided by the debugger. Most modern CPUs also have a mechanism for executing a single instruction and then causing a similar internal interrupt. And the handlers run at a low priority, so that a higher-priority interrupt, e.g. the clock, may preempt the debugger's interrupt handler to let other processes have their CPU share.
The debugger's user dialog (e.g. to continue execution, remove the breakpoint, display the current value of some variable, etc.) takes place within this interrupt handler. When target execution is resumed, the debugger backs the program counter up to the start of the breakpoint instruction it inserted, puts the saved "real" instruction back into the code, sets the CPU's 'single instruction' flag, and returns from the debug interrupt.
The single-instruction interrupt handler reinserts the breakpoint instruction, ready for the next time execution passes through this point in the code. It resets the single-instruction flag and returns from the interrupt, and target execution continues until the next breakpoint.
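The save / patch / restore cycle described above can be sketched in plain C. This is only a model: the byte array stands in for target memory (a real debugger would go through something like WriteProcessMemory or ptrace), and 0xCC is the one-byte x86 INT3 breakpoint opcode.

```c
#include <stdint.h>

#define BP_OPCODE 0xCC  /* x86 INT3: one-byte instruction that raises a debug trap */

typedef struct {
    uint32_t addr;   /* offset of the patched instruction in target code */
    uint8_t  saved;  /* the original first byte, put back before resuming */
} Breakpoint;

/* Set a breakpoint: save the original byte, write the trap opcode over it. */
void bp_set(uint8_t *code, uint32_t addr, Breakpoint *bp) {
    bp->addr   = addr;
    bp->saved  = code[addr];
    code[addr] = BP_OPCODE;
}

/* Before resuming, restore the real instruction so it can be single-stepped;
   after the single-step interrupt, bp_set() is called again to re-arm it. */
void bp_restore(const Breakpoint *bp, uint8_t *code) {
    code[bp->addr] = bp->saved;
}
```

The piece that cannot be modeled portably is the single-step itself; on x86 that is the trap flag in the flags register, which a Windows debugger would set via GetThreadContext/SetThreadContext.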
For 'run to next line', the debugger may find the first instruction of every relevant line in the code (usually limited to the current function and the continuation point upon return, though exception handlers may complicate this) and save all the instructions being overwritten. When any of these breakpoints is hit, the same restore-original / single-step / reinsert-breakpoint procedure is followed. The breakpoints remain until the scope is left, i.e. when the function is exited; the handler for that interrupt restores all the original code and then sets breakpoints on all line starts in the new scope. Likewise, if another function is called and another scope is entered, the debugger must set breakpoints in that scope.
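The scope-wide variant amounts to arming and disarming a whole table of breakpoints at once. A minimal sketch, assuming the debug information yields a list of line-start addresses for the current function (the table and limits here are hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

#define BP_OPCODE 0xCC  /* x86 INT3 one-byte breakpoint */

typedef struct {
    uint32_t addr;      /* line-start address, from the debug information */
    uint8_t  saved;     /* overwritten original byte */
} LineBreakpoint;

/* Entering a scope: arm a breakpoint at the first instruction of every line. */
size_t scope_arm(uint8_t *code, const uint32_t *line_starts, size_t nlines,
                 LineBreakpoint *bps) {
    for (size_t i = 0; i < nlines; i++) {
        bps[i].addr  = line_starts[i];
        bps[i].saved = code[line_starts[i]];
        code[line_starts[i]] = BP_OPCODE;
    }
    return nlines;
}

/* Leaving the scope (function exit): restore all the original code at once. */
void scope_disarm(uint8_t *code, const LineBreakpoint *bps, size_t n) {
    for (size_t i = 0; i < n; i++)
        code[bps[i].addr] = bps[i].saved;
}
```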
In the old days, when memory was scarce, setting a huge number of line-start breakpoints and saving information for each of them would break all memory limits. So, when halted at one line as the single current breakpoint, the debugger would look up the start of every possible next line: one for linear execution, one for a line containing a function call, two for a conditional statement, and so on. Whichever of them was hit next, the debugger would restore the original code for them all and repeat the search for possible next lines. This reduced the debugger's memory requirements a lot, but for every single line the debugger had to do a possibly complex search for the possible next lines.
Older machines might not provide a breakpoint instruction. I have seen it emulated by the OS: all machines have some internal interrupt mechanism for calling OS functions, so rather than a breakpoint instruction, the debugger inserts a breakpoint OS call. The OS transfers control to the process that has registered itself as the debugger for the target process. On old machines with a small virtual address space, there might not be room for both the target and the debugger in one space, so to inspect current values in the target, the debugger might have to request them from the OS. It must likewise ask the OS to please insert a breakpoint call in the target process at a specific address.
A small variation of that: one debugger injected a tiny routine, a few hundred bytes, into the target process. This routine could be contacted over a TCP/IP connection from virtually anywhere and could, on request, set and remove breakpoint instructions, return or set the current values of variables, etc., at a purely binary level: the remote debugger provided absolute addresses for all operations performed by the routine. Since all symbol handling etc. was done on the remote debugger machine, the impact on the target system was almost zero in both CPU and memory consumption.
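The command dispatch inside such a stub can be sketched as below. The wire format (one command byte, an address, one argument byte) is entirely hypothetical; the point is that the stub works only with absolute addresses and raw bytes, with all symbol lookup done remotely.

```c
#include <stdint.h>

#define BP_OPCODE 0xCC  /* x86 INT3 one-byte breakpoint */

/* Hypothetical command codes for the injected stub. */
enum { CMD_SET_BP = 1, CMD_CLEAR_BP = 2, CMD_PEEK = 3 };

/* Handle one command received over the connection; return the reply byte. */
uint8_t stub_dispatch(uint8_t *mem, uint8_t cmd, uint32_t addr, uint8_t arg) {
    switch (cmd) {
    case CMD_SET_BP: {               /* patch in a breakpoint; reply with the saved byte */
        uint8_t saved = mem[addr];
        mem[addr] = BP_OPCODE;
        return saved;
    }
    case CMD_CLEAR_BP:               /* restore the byte the remote debugger sent back */
        mem[addr] = arg;
        return 0;
    case CMD_PEEK:                   /* return the current value at an absolute address */
        return mem[addr];
    }
    return 0xFF;                     /* unknown command */
}
```

Note that the remote debugger, not the stub, remembers the saved byte, which is one reason the stub can stay so small.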
Inserting breakpoints (whether as instructions or as OS calls) is not possible for code in ROM, and in many cases is not a viable solution if the code is in flash memory. So some processors (typically targeted at embedded systems, where ROMmed/flashed code is common) provide a set of debug registers: whenever the program counter matches the value of one of these registers (with no modification to program code), a debug interrupt is generated. The debugger need not restore any original instruction (or reinsert the debug instruction after executing the original), but it is limited to, say, four breakpoint locations active at the same time. For 'run to next line' operation, there is no alternative to searching the debug information for possible next lines. If the number of possibilities is larger than the number of debug registers available, the debugger must resort to instruction-at-a-time execution through the current line, and when the program counter goes outside the scope of the current line, use the debug information to see where it ended up and give control to the debugger operator.
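The hardware scheme can be modeled as a fixed pool of address registers (four here, as with x86's DR0-DR3) that the CPU compares against the program counter on every instruction. A sketch, with the register count and structure purely illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_HW_BP 4  /* e.g. the four x86 debug address registers DR0-DR3 */

typedef struct {
    uint32_t addr[NUM_HW_BP];     /* watched program-counter values */
    bool     enabled[NUM_HW_BP];  /* which slots are in use */
} HwBreakpoints;

/* Claim a free debug register; return its index, or -1 if all are in use,
   in which case the debugger must fall back to single-stepping. */
int hwbp_set(HwBreakpoints *h, uint32_t addr) {
    for (int i = 0; i < NUM_HW_BP; i++) {
        if (!h->enabled[i]) {
            h->addr[i] = addr;
            h->enabled[i] = true;
            return i;
        }
    }
    return -1;
}

/* What the CPU does in hardware: compare the PC against every enabled slot. */
bool hwbp_hit(const HwBreakpoints *h, uint32_t pc) {
    for (int i = 0; i < NUM_HW_BP; i++)
        if (h->enabled[i] && h->addr[i] == pc)
            return true;
    return false;
}
```

Because the comparison happens in the CPU, nothing in ROM or flash ever needs to be rewritten, which is exactly why this is the mechanism of choice for embedded targets.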
You may conclude that writing the core of a debugger is not a task you would give to a college freshman. It really requires fingertip control over interrupts, addressing and instruction formats!
This is a new exception Microsoft implemented in Windows 10, something to do with OutputDebugString. Though in my program I check IsDebuggerPresent, so I don't know why it was raised.
This is what I saw when googling:
Windows creates a new exception with RaiseException: DBG_PRINTEXCEPTION_WIDE_C (0x4001000A). This new exception code takes 4 parameters. For backwards compatibility it also provides the ANSI version of the string.
This is what I added to my SEH handler
if (pExceptionPtrs->ExceptionRecord->ExceptionCode == 0x4001000A)
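A minimal sketch of the filter logic behind that check, with the Windows-specific pieces reduced to plain constants so it stands alone without windows.h. The 0x4001000A value is the one quoted above; 0x40010006 is, as far as I know, the older ANSI variant, and the constant names follow winnt.h.

```c
/* Exception codes raised internally by OutputDebugString (names as in winnt.h;
   0x40010006 for the ANSI variant is an assumption worth verifying in your SDK). */
#define DBG_PRINTEXCEPTION_WIDE_C 0x4001000AUL
#define DBG_PRINTEXCEPTION_C      0x40010006UL

/* Returns 1 when the code is a debug-print exception that the handler should
   swallow (i.e. return EXCEPTION_CONTINUE_EXECUTION), 0 when it should let
   other handlers see it (EXCEPTION_CONTINUE_SEARCH). */
int is_debug_print_exception(unsigned long code) {
    return code == DBG_PRINTEXCEPTION_WIDE_C || code == DBG_PRINTEXCEPTION_C;
}
```

In the real SEH handler you would test pExceptionPtrs->ExceptionRecord->ExceptionCode this way and continue execution, so the debug string is simply ignored when no debugger is attached to consume it.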
I have a CRichEditCtrl which I am using to display code (as in a debugger). When the user wishes to place a breakpoint, I TransparentBlt a red bitmap to the left. In order to give GDI its own space, I DeflateRect the rich edit's rect (for the text) on the left and call CRichEditCtrl::SetRect with the new, smaller rect. After placing the bitmap, everything looks fine.
However, when my code reaches the line where the breakpoint was set, I highlight that line of code with SetSel and SetSelectionCharFormat. This has the effect of wiping out the top of the red bullet bitmap, which I don't understand, since the SetRect should ensure that drawing doesn't go outside the rect bounds. I know the rich edit has an OLE interface (which can display bitmaps), but with that too I had a problem: the bullet shifted the text.
Sorry about that, it was not appropriate to the specifics of your question.
I cannot remember if you are using MFC or not. If you are, then perhaps create your own class that inherits from CRichEditCtrl. If not, you could use the SubclassWindow function to take control of the Windows WM_xxx messages and so modify what gets displayed.