LinuxQuestions.org
Programming This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.

06-19-2009, 01:57 PM   #1
ls100871, LQ Newbie (Registered: Jan 2006; Posts: 1)
Anyone using insight as an IDE for debugging?


I heard that it's easy to use but not very stable: it crashes quite often. Do you have any experience to share with me? Thanks.
 
06-19-2009, 03:40 PM   #2
bigearsbilly, Senior Member (Registered: Mar 2004; Location: England; Distribution: FreeBSD, Debian, Mint, Puppy; Posts: 3,269)
For debugging? Try ddd.
 
06-19-2009, 03:44 PM   #3
johnsfine, Senior Member (Registered: Dec 2007; Distribution: CentOS; Posts: 4,969)
I don't do much debugging in Linux. When I debug in Linux, I usually use Insight. It is often a pain to get it to work, but when it works it is a lot easier than gdb.
 
06-19-2009, 04:09 PM   #4
David1357, Senior Member (Registered: Aug 2007; Location: South Carolina, U.S.A.; Distribution: Ubuntu, Fedora Core, Red Hat, SUSE, Gentoo, DSL, coLinux, uClinux; Posts: 1,300)
Quote:
Originally Posted by johnsfine
It is often a pain to get it to work, but when it works it is a lot easier than gdb.
I had a lot of problems with a customized version of Insight provided by a vendor. However, when I downloaded the latest source from Red Hat and built it myself, it was very stable and worked very well.

I had to build it for an ARM target, but the instructions for doing that are widely available, and the procedure is straightforward.
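For anyone attempting the same thing, the cross-build follows the usual configure pattern. This is only a sketch; the version number, target triplet, and install prefix below are assumptions, not details from this thread:

```shell
# Hypothetical sketch: cross-building Insight as a debugger front end for
# an ARM target. Adjust the version and target triplet for your toolchain
# (e.g. arm-none-eabi or arm-linux-gnueabi).
tar xjf insight-6.8.tar.bz2
mkdir insight-build
cd insight-build
# Build on an x86 host to debug ARM binaries; install under $HOME so the
# system toolchain is not touched.
../insight-6.8/configure --target=arm-linux-gnueabi --prefix="$HOME/insight-arm"
make
make install
```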
 
06-19-2009, 04:41 PM   #5
johnsfine, Senior Member (Registered: Dec 2007; Distribution: CentOS; Posts: 4,969)
The Insight bugs I had to kludge workarounds for in the source code stemmed from the dual nature of the x86_64 architecture: by design, it starts out thinking it is x86, then corrects to x86_64. That switch didn't work right, corrupting data and causing malfunctions later.

The Insight developer I exchanged emails with never duplicated the problems, so I don't think there was an official correction. I never understood how the code as written could work even as far as it did, so I never understood the actual bug and only kludged a cover up, not a correction.
 
06-22-2009, 10:29 AM   #6
David1357, Senior Member (Registered: Aug 2007; Location: South Carolina, U.S.A.; Distribution: Ubuntu, Fedora Core, Red Hat, SUSE, Gentoo, DSL, coLinux, uClinux; Posts: 1,300)
Quote:
Originally Posted by johnsfine
The Insight bugs I had to kludge workarounds for in the source code stemmed from the dual nature of the x86_64 architecture: by design, it starts out thinking it is x86, then corrects to x86_64. That switch didn't work right, corrupting data and causing malfunctions later.
Do you mean there was an error in GDB? Since Insight is only a front end for GDB, I am not sure why it would care about a switch from 32-bit to 64-bit mode in the processor. Or are you talking about the GDB that comes with Insight?

Quote:
Originally Posted by johnsfine
The Insight developer I exchanged emails with never duplicated the problems, so I don't think there was an official correction. I never understood how the code as written could work even as far as it did, so I never understood the actual bug and only kludged a cover up, not a correction.
Since I do not understand what you are describing, it is possible that the "Insight developer" did not understand you either. Have you tried the same thing with the latest version of Insight? Have you tried the same thing with the equivalent version of GDB? I believe the Insight and GDB versions match: for example, if you are using GDB 6.8, you should get the same behaviour from Insight 6.8.

Last edited by David1357; 06-22-2009 at 10:32 AM. Reason: Added follow up questions about Insight's built-in GDB.
 
06-22-2009, 11:09 AM   #7
johnsfine, Senior Member (Registered: Dec 2007; Distribution: CentOS; Posts: 4,969)
The bugs that I came closest to understanding were just Insight bugs, not GDB bugs. Data structures meant to support Insight's GUI were partially configured for X86 whenever Insight was started for x86_64. Later in startup, they were partially reconfigured for x86_64 in a way that corrupted memory, setting up for a later crash when the corrupted memory locations mattered.

The "bugs" that caused me the most trouble were GDB problems, not Insight bugs. I wish I had more time to investigate the issues or to redo my testing and corrections for a later version. But I'm in the middle of too many other things.

The code I want to debug or profile (related issue, see below) is massively templated and most of the executed code is from hpp files, not cpp files.

I build mainly with the Intel 10 compiler or with GCC 4.3.2. With either of those, the basic tools (in binutils, I think) that read back the correspondence between source line and asm line get badly confused (I think by deep nesting of templated code).

If the same code is compiled by GCC 4.1.2, the same tools for reading the source line to asm line correspondence get much less confused. But for other reasons, I generally can't use gcc 4.1.2.

When you do basic operations, such as stepping, and especially stepping into or over a call, even in an asm view in Insight, GDB depends far more than it should on the correctness of the correspondence between source line and asm line, and its responses to errors in that data are far more destructive than they should be.

Obviously, source line level stepping is not reasonable when the data relating source lines to asm lines is not read correctly. But asm level stepping should still work. I dug through the GDB source code enough to understand why flaws in the debug info break even asm level stepping, but not well enough to know how to fix it.

I have trouble, though not as severe, using OProfile on the same code. The errors connecting source line to asm line in OProfile match those in Insight (I assume the same underlying tools are used). I see the same low level of error for gcc 4.1.2 generated code and high level of error for gcc 4.3.2 or Intel 10 generated code. With OProfile (unlike Insight) I had time to try a much newer version; some internal OProfile bugs were fixed, but the general failure to understand the debug info was unchanged (this also involved switching to CentOS 5.3 and using its bundled binutils, instead of locally compiling newer binutils on an older CentOS).
 
06-22-2009, 12:11 PM   #8
David1357, Senior Member (Registered: Aug 2007; Location: South Carolina, U.S.A.; Distribution: Ubuntu, Fedora Core, Red Hat, SUSE, Gentoo, DSL, coLinux, uClinux; Posts: 1,300)
Quote:
Originally Posted by johnsfine
The bugs that I came closest to understanding were just Insight bugs, not GDB bugs. Data structures meant to support Insight's GUI were partially configured for X86 whenever Insight was started for x86_64. Later in startup, they were partially reconfigured for x86_64 in a way that corrupted memory, setting up for a later crash when the corrupted memory locations mattered.
This is an error I can understand. A lot of applications still do not have the x86_64 coverage needed to find all the latent errors. It sounds like Insight is one of them.

Quote:
Originally Posted by johnsfine
The code I want to debug or profile (related issue, see below) is massively templated and most of the executed code is from hpp files, not cpp files.
The hidden cost of using templates. You have my sympathies.

Quote:
Originally Posted by johnsfine
I build mainly with the Intel 10 compiler or with GCC 4.3.2. With either of those, the basic tools (in binutils, I think) that read back the correspondence between source line and asm line get badly confused (I think by deep nesting of templated code).
Are you compiling with any optimization? Usually any optimization flags destroy your chances of getting line numbers to match code very well.

Quote:
Originally Posted by johnsfine
If the same code is compiled by GCC 4.1.2, the same tools for reading the source line to asm line correspondence get much less confused. But for other reasons, I generally can't use gcc 4.1.2.
It sounds very strange that two completely different compilers (Intel 10 and GCC 4.3.2) have the previous problem, but another version of GCC does not. Maybe there is a library shared by Intel 10 and GCC 4.3.2 that has errors in it.
 
06-22-2009, 12:41 PM   #9
johnsfine, Senior Member (Registered: Dec 2007; Distribution: CentOS; Posts: 4,969)
Quote:
Originally Posted by David1357
Are you compiling with any optimization? Usually any optimization flags destroy your chances of getting line numbers to match code very well.
Usually I need optimization. Without optimization I usually can't even reach the section of code I want to debug.

I work mainly in Windows, where many mixtures of debug and release code are less sound than in Linux. I know that in Linux I could do more with mixtures of optimized and unoptimized code, but my build process makes that hard and I generally don't want to do it.

I know the match between source line and asm line can't be perfect with optimization. I don't expect it to be.

I think the compilers are flawed in making the match so bad; it really could be better despite optimization. And I think (but I'm not sure) there must be an extra bug (probably in binutils) that reads back an even worse match than the compilers wrote.

But most seriously, GDB is flawed in depending on a good match that is neither available nor necessary.

I'm a very good asm programmer. When I don't expect the source to asm match to be sound, I expect to debug in an ASM view. When the match is really bad (as it usually is) mixed asm with source is a useless view.

Insight almost supports a mode where two separate views (one asm and one source) are both synced to the same execution steps. As you step through asm code the source view would jump around wildly even if it correctly tracked the match. When it's confused over the match it would jump to some very stupid places, but that's still OK. If I see a correct source line only occasionally, it is enough. If that mode worked, I'd be thrilled.

Insight itself had a bunch of GUI behavior bugs in that mode, which were an added annoyance, but ultimately not serious.

The serious bugs in that situation were GDB. It depended on correct debug info even for step operations that should have been entirely defined by the asm view.

Quote:
It sounds very strange that two completely different compilers (Intel 10 and GCC 4.3.2) have the previous problem, but another version of GCC does not.
It was strange there was so much difference between 4.1.2 and 4.3.2. I wish I had time to investigate. But it was not a bug vs. no bug situation. 4.1.2 had the problem. Intel 10 had it much worse. 4.3.2 had it slightly worse than even Intel 10.

Quote:
Maybe there is a library shared by Intel 10 and GCC 4.3.2 that has errors in it.
No. The problems were mainly not in libraries and the Intel 10 build was using .hpp and .a files bundled with gcc 3.4.6.

Nothing in our Intel 10 build process depends on GCC 4.anything even being installed.

BTW, most of the testing and bug cover kludges etc. that I did was on a version of Insight that reports itself as
GNU gdb (GDB) 6.8.50.20080920-cvs
I haven't even looked to see what is available beyond that.
The version installed later on our Centos 5.3 system just reports itself as
GNU gdb 6.8
I don't even know if that is older or newer. It doesn't work at all on the 64 bit programs I want to debug.

Last edited by johnsfine; 06-22-2009 at 12:44 PM.
 
06-22-2009, 04:58 PM   #10
Sergei Steshenko, Senior Member (Registered: May 2005; Posts: 4,481)
Quote:
Originally Posted by johnsfine
...
I know the match between source line and asm line can't be perfect with optimization. I don't expect it to be.
...

Don't expect any. With modern-day optimizations, a whole function can be converted into assembly code with no one-to-one matching between C and assembly lines.

Modern optimizers often even change order of statements.

You can't even expect auto (on-stack) variables to exist from the debugger's point of view.

The problem is not with the debugger but with the compiler.
 
06-22-2009, 06:27 PM   #11
johnsfine, Senior Member (Registered: Dec 2007; Distribution: CentOS; Posts: 4,969)
Quote:
Originally Posted by Sergei Steshenko
with modern-day optimizations, a whole function can be converted into assembly code with no one-to-one matching between C and assembly lines.
Obviously not one to one. IIUC, the data structures support one-to-many: one source line to many discontiguous asm lines (as opposed to the unoptimized case, where one source line maps to many usually contiguous asm lines).

The reality is many to many. Each source line can correspond to several discontiguous asm lines and each asm line can correspond to several discontiguous source lines. Even if the data structures supported that, it would be a challenge to make debuggers and profilers able to use it.

Quote:
Modern optimizers often even change order of statements.
Major understatement.

Quote:
You can't even expect auto (on-stack) variables to exist from the debugger's point of view.
But the debug info can even identify which register an auto variable occupies at a given moment.

Quote:
The problem is not with the debugger but with the compiler.
Some of the problem is with the compiler. There are other ways (annotated asm listings etc.) to get the compilers to tell you what source line is associated with each asm line and I know there are major flaws in the logic and some really lame wrong associations get made. But on average, that association isn't too bad.

I think some of the problem is in binutils reading the info back. It's hard to be really sure, but certainly the information is much less correct when binutils reads it back than when the compiler gives its original view of the association in an annotated asm listing (which doesn't prove the compiler didn't write a wrong version of information it had just computed correctly).

But some of the problem is with the debugger. If you do a step or step into or step over in asm view (especially step into) the debugger should be able to just do it, regardless of how wrong the debug info might be. Instead the debugger frequently gets any or all of those operations wrong. Despite the user trying to work in an asm view, GDB looks at where it is in a high level view, which is often wrong, then executes a primitive single step operation, then looks again at where it is in a high level view, which is differently wrong, then decides what to do about it, which is totally inane.
 
06-22-2009, 07:01 PM   #12
Sergei Steshenko, Senior Member (Registered: May 2005; Posts: 4,481)
Quote:
Originally Posted by johnsfine
...
There are other ways (annotated asm listings etc.) to get the compilers to tell you what source line is associated with each asm line and I know there are major flaws in the logic and some really lame wrong associations get made. But on average, that association isn't too bad.
...
I drew my conclusions based on annotated assembly listings.

The debugger can't know better than the annotated assembly listing.

On a side note: because of optimizations, modern code is kind of undebuggable. I can find a thread/bug report of mine in which the gcc developers explicitly state that code which is wrong from the point of view of C (in my example it was integer overflow, and it wasn't even my code) will behave differently depending on the optimization level.
 