Old 09-22-2008, 07:43 AM   #61
lmenten
LQ Newbie
 
Registered: Apr 2007
Posts: 15

Rep: Reputation: 0

Quote:
Originally Posted by Sergei Steshenko View Post
I think you missed my point regarding Perl.

I meant I use Perl to generate C/C++ code, i.e. I use it as a better than CPP/C++ templates engine.

And, FWIW, Perl is compiled into bytecode (as Java, Python, Ruby, but _not_ as sh, tcl), so it's not an interpreted language.

I did miss your point. I agree. Using Perl gives one close control over the code that is generated. Standard libraries of Perl "templates" would be a powerful tool.

These byte codes (and abstract machine code instruction sets) used to be called "Interpreted Languages." (E.g. the SC of the UCSD Pascal p-System.)

Wikipedia says "In computer programming an interpreted language is a programming language whose implementation often takes the form of an interpreter." Virtual machine code is still interpreted. Wikipedia puts Perl into the class of virtual machine interpreted languages.

Basic and FORTRAN were among the first languages to be compiled into byte codes. Pascal had a widely used, specified intermediate language (the p-System stack code). If the compiler generates abstract rather than machine-specific code, it is still called an interpreted-language implementation. Byte codes still require an interpreter, and dynamic languages do incur an overhead. Java and Python are occasionally compiled directly into machine-specific code for efficiency, but they are still called interpreted languages because that was the design target.
 
Old 09-22-2008, 08:13 AM   #62
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by lmenten View Post
I did miss your point. I agree. Using Perl gives one close control over the code that is generated. Standard libraries of Perl "templates" would be a powerful tool.

These byte codes (and abstract machine code instruction sets) used to be called "Interpreted Languages." (E.g. the SC of the UCSD Pascal p-System.)

Wikipedia says "In computer programming an interpreted language is a programming language whose implementation often takes the form of an interpreter." Virtual machine code is still interpreted. Wikipedia puts Perl into the class of virtual machine interpreted languages.

Basic and FORTRAN were among the first languages to be compiled into byte codes. Pascal had a widely used, specified intermediate language (the p-System stack code). If the compiler generates abstract rather than machine-specific code, it is still called an interpreted-language implementation. Byte codes still require an interpreter, and dynamic languages do incur an overhead. Java and Python are occasionally compiled directly into machine-specific code for efficiency, but they are still called interpreted languages because that was the design target.
Well, my point WRT compiled/interpreted is that Perl and the like first compile the whole code (I'm not talking about runtime bindings here), so stupid mistakes are immediately caught.

For TCL, for example, which is truly interpreted, one needs a static checker to catch the same type of mistakes, and it's a reason I'm trying to avoid the language whenever possible.

FWIW, first Verilog implementations used P-code (I'm from VLSI world).

Well, this latest post of mine is OT, just wanted to make things clear.

...

If you are interested, I can demo my own Perl template engine here; it's very easy to use and open source, and it can easily be combined with any target language (C/C++, Verilog, Perl itself - you name it).
 
Old 09-23-2008, 06:09 PM   #63
lmenten
LQ Newbie
 
Registered: Apr 2007
Posts: 15

Rep: Reputation: 0
Quote:
Originally Posted by Sergei Steshenko View Post
Well, my point WRT compiled/interpreted is that Perl and the like first compile the whole code (I'm not talking about runtime bindings here), so stupid mistakes are immediately caught.

For TCL, for example, which is truly interpreted, one needs a static checker to catch the same type of mistakes, and it's a reason I'm trying to avoid the language whenever possible.

FWIW, first Verilog implementations used P-code (I'm from VLSI world).

Well, this latest post of mine is OT, just wanted to make things clear.

...

If you are interested, I can demo my own Perl template engine here; it's very easy to use and open source, and it can easily be combined with any target language (C/C++, Verilog, Perl itself - you name it).
I am interested. Sounds very cool.
 
Old 10-05-2008, 12:58 PM   #64
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by lmenten View Post
I am interested. Sounds very cool.
Please see the thread I've opened: http://www.linuxquestions.org/questi...e-like-674387/
.
 
Old 07-11-2014, 04:35 PM   #65
Michael Hewitt
LQ Newbie
 
Registered: Jul 2014
Distribution: CentOS
Posts: 4

Rep: Reputation: Disabled
C vs C++ Kernel Code

With regard to using C vs C++ in Linux kernel code, there are two issues at stake:

1. Which language is better suited to kernel programming?
2. Which language produces faster executable code?

#2 is easier to answer, so I will handle it first. The answer is that you only pay for what you use. If you write C code and compile it with a C++ compiler, you will get equivalent performance. To prove this quantitatively, I ran matir's 'c_test.c' program from an earlier post in this thread through gcc and g++ with optimizations enabled and exceptions disabled:

Code:
$ cat c_test.c
#include <stdio.h>

int main(int argc,char **argv){
        int i;
        for(i=0;i<1000;i++)
                printf("%d\n",i);

        return 0;
}
$ diff c_test.c cpp_test.cpp
$ gcc -O -S c_test.c
$ g++ -O --no-exceptions -S cpp_test.cpp
$ wc -l *.s
  31 cpp_test.s
  31 c_test.s
  62 total
$ diff c_test.s cpp_test.s
1c1
<        .file    "c_test.c"
---
>        .file    "cpp_test.cpp"
9c9
< .LFB11:
---
> .LFB12:
28c28
< .LFE11:
---
> .LFE12:
There is literally no difference in the resulting machine code between the C and C++ versions of this program. If, as in matir's C++ example, you use a more powerful console output system such as 'cout', you will pay for it. However, if you do not require such power, the more efficient printf (or printk in the kernel) should be employed. Likewise, if you use C++ exceptions, virtual methods, or RTTI, you will pay for those as well. However, if you do not use these C++ language features, you will not pay for them.

So, to state it as succinctly as possible: C & C++ have *exactly* the same runtime performance if the same functionality is employed.


On to the more difficult question #1: Is C or C++ better suited to kernel programming?

To answer this question, I suggest that we look at Linux kernel code. If I grep the Linux source headers for "int (*" and "void (*", I find 1088 hits. The overwhelming majority of these hits are function pointers inside structs. I will call these what they are: "C classes". I find C classes used to probe device drivers (struct device_driver in device.h), access files (struct file_operations in fs.h), map dma memory (struct dma_map_ops in dma_mapping.h), and encrypt hard drive information (struct crypto_type in algapi.h). I even find "C inheritance" going on, where a C subclass inherits from a C base class (struct ablkcipher_request inherits from crypto_async_request in crypto.h).
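
To make the "C class" pattern concrete, here is a minimal userspace sketch of the function-pointer-struct style described above; the struct and function names are invented for illustration and are not the actual kernel definitions:

Code:
/* "Base class": a table of function pointers plus shared state. */
#include <stdio.h>

struct device_ops {
    int  (*probe)(void *dev);      /* "virtual method" via function pointer */
    void (*remove)(void *dev);
};

/* "Derived class": embeds the base as its first member, so a pointer to
 * my_driver can also be treated as a pointer to device_ops. */
struct my_driver {
    struct device_ops ops;         /* base "class" */
    int               irq;         /* derived-class data */
};

static int my_probe(void *dev) {
    struct my_driver *drv = (struct my_driver *)dev;
    printf("probing driver on IRQ %d\n", drv->irq);
    return 0;
}

static void my_remove(void *dev) { (void)dev; }

int main(void) {
    struct my_driver drv = { { my_probe, my_remove }, 11 };
    /* "Method call" through the function-pointer table, as the kernel does. */
    drv.ops.probe(&drv);
    return 0;
}

In C++, the same structure would be written as a base class with virtual functions, and the compiler would generate and fill in the function-pointer table (the vtable) automatically.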

In short, the Linux kernel is object-oriented. So, I ask a simple question: Which language is object oriented? C or C++?

But, the benefits of using C++ in the kernel and within device drivers would not stop with the use of classes.

C++ templates provide powerful type-safe reuse mechanisms, and they completely boil away at compile time. Think of templates as type-safe macros with excellent inlining and code sharing, maximizing speed and minimizing code size. In our Linux device driver, for example, we have a DMA-mapped command buffer that is represented by a C struct. If I could templatize this command buffer, I could reuse it for different PCI cards within the same device driver with no performance penalty. As it is, I have to use #ifdef macros to specify the different command buffer entries used for each PCI card and then compile each driver separately. I have used templates in many embedded system projects and I have looked at the resulting machine code. In every case the resulting machine code was extremely small and fast, often far exceeding the performance of the C++ STL and WindRiver VxWorks intrinsics (mostly because of their heap churn - we avoid the heap whenever possible and use pre-allocated pools of fixed-size entries). I like to call this sort of programming "C with templates".
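
As a rough sketch of the templatized command buffer idea - hypothetical card types and field names, not our actual driver code - something like the following removes the #ifdef duplication while still generating card-specific code at compile time:

Code:
#include <cstddef>
#include <cstdint>

/* Hypothetical per-card command layouts (illustrative only). */
struct CardACmd { std::uint32_t opcode; std::uint32_t addr; };
struct CardBCmd { std::uint32_t opcode; std::uint64_t addr; std::uint32_t flags; };

/* One command-buffer implementation, reused for every card type.
 * Each instantiation is generated at compile time, so there is no
 * indirection, no heap use, and no runtime penalty versus hand-written C. */
template <typename CmdT, std::size_t N>
class CommandBuffer {
public:
    bool push(const CmdT &cmd) {
        if (count_ == N) return false;   // buffer full
        ring_[count_++] = cmd;
        return true;
    }
    std::size_t pending() const { return count_; }
private:
    CmdT        ring_[N] = {};           // statically sized, no heap
    std::size_t count_   = 0;
};

int main() {
    CommandBuffer<CardACmd, 64> a;       // one instantiation per card type
    CommandBuffer<CardBCmd, 32> b;
    a.push(CardACmd{0x01, 0x1000});
    b.push(CardBCmd{0x02, 0x2000, 0x1});
    return (a.pending() == 1 && b.pending() == 1) ? 0 : 1;
}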

Exceptions enable very clean, very robust low-level code. The device drivers at my company would be far more maintainable if we could throw exceptions internally and clean up allocated resources with destructors, catching the exceptions before returning to the kernel. Here is a quote from Bjarne Stroustrup on the overhead of exceptions (I cannot post a link because this is my first ever post to this forum):

Quote:
Modern C++ implementations reduce the overhead of using exceptions to a few percent (say, 3%) and that's compared to no error handling. Writing code with error-return codes and tests is not free either. As a rule of thumb, exception handling is extremely cheap when you don't throw an exception. It costs nothing on some implementations. All the cost is incurred when you throw an exception: that is, "normal code" is faster than code using error-return codes and tests. You incur cost only when you have an error.
If exceptions were baked into the Linux kernel itself, they could be a phenomenally effective way to propagate layered error information up the call stack while freeing resources along the way. Instead of receiving a bare integer (e.g. -12) from a system call, the caller could receive a stack of nested exception structures showing the innermost error and the additional context added at each level. This may sound horribly expensive, and it is. However, throwing an exception only happens during an error condition; exceptions are not thrown during normal execution. That is to say, most of the expense of exception handling occurs when you actually throw an exception, and that is exactly when you want the maximum amount of information coming back to the caller for debugging purposes.
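
As a userspace sketch of that layered-error idea - standard-library names are used here only for brevity, and an in-kernel version would obviously need its own exception machinery:

Code:
#include <cstdio>
#include <exception>
#include <stdexcept>

/* Everything here (function names, error layers) is invented for illustration. */
static void map_dma_region() {
    throw std::runtime_error("dma_map: out of IOMMU entries");   // innermost error
}

static void start_transfer() {
    try {
        map_dma_region();
    } catch (...) {
        /* Wrap the lower-level error with this layer's context. */
        std::throw_with_nested(std::runtime_error("start_transfer: cannot map buffer"));
    }
}

/* Print the whole error chain, outermost first. */
static void report(const std::exception &e, int depth = 0) {
    std::printf("%*s%s\n", depth * 2, "", e.what());
    try {
        std::rethrow_if_nested(e);
    } catch (const std::exception &inner) {
        report(inner, depth + 1);
    }
}

int main() {
    try {
        start_transfer();
    } catch (const std::exception &e) {
        report(e);            // prints both layers instead of a single -ENOMEM
    }
    return 0;
}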

Lambdas are an amazingly succinct way to pass callbacks without requiring a whole new struct and function pointer. They are useful when the callee invokes the callback during its execution, and the callback has access to the local variables it captures from the original stack frame. Having used lambdas in many other languages and manually built my own closures to imitate lambda behavior, I can say that C++ lambdas provide a huge amount of value. They even let you control which locals may be referenced and whether each one is captured by reference or by value.
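
A minimal sketch of the callback case (illustrative names):

Code:
#include <cstdio>

/* A callee that walks some items and invokes a caller-supplied
 * callback for each one. */
template <typename Fn>
void for_each_device(Fn callback) {
    for (int id = 0; id < 3; ++id)
        callback(id);
}

int main() {
    int found = 0;               // local state in the caller's frame

    /* The lambda captures 'found' by reference; no separate struct plus
     * function pointer plus "void *context" argument is needed. */
    for_each_device([&found](int id) {
        std::printf("device %d\n", id);
        ++found;
    });

    return found == 3 ? 0 : 1;
}

Compare this with the C equivalent, which needs a separate named function plus a void * context parameter threaded through the callee.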

Don't get me wrong -- C++ is a dangerous and relatively ugly language. However, if it is used well, it is a far more effective systems language than C. If C++ were to be used within the Linux kernel and/or device drivers, I would suggest prohibiting many dangerous and/or expensive operations. For example, passing non-const references to functions should be disallowed, since there is no indication at the call point that the callee can modify the passed-in variables. Static constructors should be flagged, as they can cause a hidden performance hit during module load. Many other C++ pitfalls could be disabled and/or flagged as well. The pathetic RTTI can be disabled entirely. Much of the STL (Standard Template Library) could be left out until decisions are made to port various pieces into the kernel.
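
To illustrate the non-const reference point with a tiny, hypothetical example:

Code:
// Why non-const references are worth flagging: the mutation is invisible
// at the call site, unlike the C-style pointer version.
void scale(int &value, int factor) {     // callee may modify 'value'
    value *= factor;
}

void scale_ptr(int *value, int factor) { // C style: intent visible at call site
    *value *= factor;
}

int main() {
    int x = 2;
    scale(x, 10);        // nothing at the call site says x can change
    scale_ptr(&x, 10);   // the '&' makes the mutation obvious, as in C
    return x == 200 ? 0 : 1;
}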

But overall, as a language, C++ would be a superb choice for Linux kernel and device driver programming, as evidenced by the preponderance of object oriented code already in the Linux kernel.
 
1 member found this post helpful.
Old 07-13-2014, 08:50 PM   #66
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,671
Blog Entries: 4

Rep: Reputation: 3945
The real problem is that ... most programming languages are designed to run within the environment that is created by the kernel. The code within the kernel itself doesn't have access to the cool stuff that exists in userland. Such as, say, C++ runtime libraries.
 
Old 07-14-2014, 12:26 AM   #67
Michael Hewitt
LQ Newbie
 
Registered: Jul 2014
Distribution: CentOS
Posts: 4

Rep: Reputation: Disabled
Forget runtime libraries

The runtime libraries are indeed cool, but I submit that there would be significant value to both kernel and driver development by using C++ 'in the raw', even without the runtime libs. Pure classes, templates, lambdas, and exceptions could take the current object-oriented Linux kernel to a whole new level of maintainability, reusability, and developer efficiency without sacrificing performance.

In my embedded software development, I make very little use of the C++ runtime libs - they are too 'heap happy' to give us the performance we need. Our code looks a lot like kernel code - mostly C, but much cleaner because our classes don't require manual function pointer initialization like the kernel 'C classes'.

Also, we have recently started using exceptions for internal error handling, always catching at the top level before returning to external code, and the improvement in readability and robustness is striking. We never forget to unlock a resource because we have no 'naked locks': every resource lock is guarded by a destructor that is guaranteed to run if the stack unwinds due to an exception. All of this would work just as efficiently and effectively inside the kernel.
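
A minimal sketch of the 'no naked locks' pattern, with an invented SpinLock stand-in for the real lock primitive:

Code:
#include <cstdio>

/* Illustrative spinlock stand-in; in real driver code this would wrap the
 * platform's actual lock primitive. */
struct SpinLock {
    void lock()   { std::puts("lock");   }
    void unlock() { std::puts("unlock"); }
};

/* RAII guard: the destructor releases the lock on every exit path,
 * including stack unwinding caused by an exception. */
class LockGuard {
public:
    explicit LockGuard(SpinLock &l) : lock_(l) { lock_.lock(); }
    ~LockGuard() { lock_.unlock(); }
    LockGuard(const LockGuard &) = delete;
    LockGuard &operator=(const LockGuard &) = delete;
private:
    SpinLock &lock_;
};

static void do_work(SpinLock &l, bool fail) {
    LockGuard guard(l);                // no "naked lock"
    if (fail)
        throw 42;                      // unlock still runs during unwinding
    std::puts("work done");
}

int main() {
    SpinLock l;
    try { do_work(l, true); } catch (int) { std::puts("caught"); }
    do_work(l, false);
    return 0;
}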

But, the best part has been the templates. Templates provide unprecedented code reusability with absolutely no runtime hit. As mentioned in my previous post, I call this style of programming 'C with templates'. Again, we do not use the C++ STL (Standard Template Library) because it is too 'heap happy' and subbing out the allocators is onerous. Instead, we utilize our own bare bones templates for queues and lists that perform no heap allocations and generally compile down to a small handful of very fast pointer manipulations, but are very readable and reusable across our projects.

I could see a set of reusable kernel templates wrapping INIT_LIST_HEAD, list_add, etc., as well as other APIs such as sg_init_table, sg_next, etc. The kernel functionality could be organized into much more reusable templates that bring type safety but boil away at compile time, producing code that is as efficient as, or more efficient than, the current concoction of loosely grouped global C function calls and non-type-checked macros. All of this could be done without breaking any existing kernel C code: the templates would simply organize the existing functionality and compile down to calls to the existing C functions. Runtime performance would be equivalent, but understandability, reusability, and type safety would be significantly improved. Developers who want to stick with the C APIs could do so; developers who prefer the C++ templates could develop happily right alongside them, enjoying the improved type safety and reusability.
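
As a sketch of what such a wrapper might look like - using a simplified userspace stand-in for list_head rather than the real kernel API, and invented names throughout:

Code:
#include <cstddef>
#include <cstdio>

/* Simplified stand-in for the kernel's struct list_head; a real version of
 * this template would wrap INIT_LIST_HEAD / list_add / list_for_each_entry. */
struct list_head { list_head *next, *prev; };

static void init_list_head(list_head *h) { h->next = h->prev = h; }
static void list_add(list_head *n, list_head *h) {
    n->next = h->next; n->prev = h;
    h->next->prev = n; h->next = n;
}

/* Type-safe intrusive list: Member names the list_head embedded in T, so
 * iteration hands back T& directly - no container_of at the call site. */
template <typename T, list_head T::*Member>
class IntrusiveList {
public:
    IntrusiveList() { init_list_head(&head_); }

    void add(T &item) {
        /* Remember where the embedded node sits inside T (container_of). */
        offset_ = reinterpret_cast<char *>(&(item.*Member)) -
                  reinterpret_cast<char *>(&item);
        list_add(&(item.*Member), &head_);
    }

    template <typename Fn>
    void for_each(Fn fn) {
        for (list_head *p = head_.next; p != &head_; p = p->next)
            fn(*reinterpret_cast<T *>(reinterpret_cast<char *>(p) - offset_));
    }

private:
    list_head      head_;
    std::ptrdiff_t offset_ = 0;
};

struct Request { int id = 0; list_head node = {}; };

int main() {
    IntrusiveList<Request, &Request::node> queue;
    Request a, b;
    a.id = 1; b.id = 2;
    queue.add(a);
    queue.add(b);
    queue.for_each([](Request &r) { std::printf("request %d\n", r.id); });
    return 0;
}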
 
Old 07-14-2014, 01:31 AM   #68
a4z
Senior Member
 
Registered: Feb 2009
Posts: 1,727

Rep: Reputation: 742
I agree that C++ is better than C because it is C plus whatever parts of C++ you choose to use.

And you do not have to use everything - like gcc, which set very clear rules about what is allowed when they ported their codebase to C++.

Some OS vendors do not even ship a plain C compiler, only a C++ compiler - Microsoft, for example.

But Linux is C, and I think that will not change. So if you want a C++ base for a kernel, I guess you have to come up with something new.
 
Old 07-15-2014, 01:49 AM   #69
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
Since I made the conscious decision to give up C++ and go back to C I can now sleep at night not giving a damn
how to dereference a pure virtual base protected smart reference to an abstract class friend function pointer destructor
blah blah blah blah blah.

I also get a lot more work done.
 
Old 07-15-2014, 03:03 PM   #70
psionl0
Member
 
Registered: Jan 2011
Distribution: slackware_64 14.1
Posts: 722
Blog Entries: 2

Rep: Reputation: 124
Quote:
Originally Posted by bigearsbilly View Post
Since I made the conscious decision to give up C++ and go back to C I can now sleep at night not giving a damn
how to dereference a pure virtual base protected smart reference to an abstract class friend function pointer destructor
blah blah blah blah blah.
You don't have to use the "blah blah blah blah blah".

Encapsulation, inheritance, and method overloading alone seem to be attractive reasons for using C++, especially in a group project. If you can avoid linking to the standard C++ libraries, then there should not be any speed or code-size penalties.
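
For example, a tiny sketch of encapsulation, inheritance, and overloading that needs no standard library at all (the names are hypothetical):

Code:
class GpioPin {
public:
    explicit GpioPin(int number) : number_(number), value_(0) {}
    void set()          { value_ = 1; }          // overload: drive high
    void set(int value) { value_ = value; }      // overload: explicit level
    int  value() const  { return value_; }
private:
    int number_;   // encapsulated: callers cannot poke these directly
    int value_;
};

/* Inheritance without virtual functions costs nothing at runtime. */
class DebugGpioPin : public GpioPin {
public:
    explicit DebugGpioPin(int number) : GpioPin(number), writes_(0) {}
    void set(int value) { ++writes_; GpioPin::set(value); }
    int  writes() const { return writes_; }
private:
    int writes_;
};

int main() {
    DebugGpioPin led(17);
    led.set(1);
    return (led.value() == 1 && led.writes() == 1) ? 0 : 1;
}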
 
Old 07-15-2014, 04:34 PM   #71
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by psionl0 View Post
You don't have to use the "blah blah blah blah blah".

Encapsulation, inheritance, and method overloading alone seem to be attractive reasons for using C++, especially in a group project. If you can avoid linking to the standard C++ libraries, then there should not be any speed or code-size penalties.
Actually not. You don't get the same error detection.

The problem is that a kernel has to operate at the lowest level... and that includes avoiding the C++ tendency to mangle function names.

That makes it very hard to debug.
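
To illustrate the mangling issue (hypothetical names): a C++ function's symbol encodes its signature, which is what shows up in symbol tables and oops traces, while entry points declared extern "C" keep their plain C names.

Code:
// A C++ function's symbol encodes its signature; with g++ the function below
// typically becomes something like _Z11compute_sumii (Itanium ABI mangling).
int compute_sum(int a, int b) {          // mangled symbol
    return a + b;
}

// Entry points that must be visible to C code or to symbol tables are
// conventionally declared extern "C", which keeps the plain C name:
extern "C" int my_driver_init(void) {    // symbol stays "my_driver_init"
    return compute_sum(1, 2) == 3 ? 0 : -1;
}

int main() {
    return my_driver_init();             // inspect the symbols with nm to compare
}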
 
Old 07-15-2014, 08:52 PM   #72
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,671
Blog Entries: 4

Rep: Reputation: 3945
Quote:
Originally Posted by Michael Hewitt View Post
The runtime libraries are indeed cool, but I submit that there would be significant value to both kernel and driver development by using C++ 'in the raw', even without the runtime libs. Pure classes, templates, lambdas, and exceptions could take the current object-oriented Linux kernel to a whole new level of maintainability, reusability, and developer efficiency without sacrificing performance.
Oops. Kernel-land doesn't have "exceptions" in the C++ sense: exception handling is built on runtime-library support (unwind tables and an unwinder) that does not exist inside the kernel. And this is only one example. The kernel world is much more primitive, since it is fundamentally about hardware control.

I'm not entirely qualified to say whether C++ code "couldn't" be used in that context, but I think that there are enough potential points-of-contention to make me fairly certain why it isn't being done now.
 
Old 07-15-2014, 10:30 PM   #73
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by sundialsvcs View Post
Oops. Kernel-land doesn't have "exceptions" in the C++ sense: exception handling is built on runtime-library support (unwind tables and an unwinder) that does not exist inside the kernel. And this is only one example. The kernel world is much more primitive, since it is fundamentally about hardware control.

I'm not entirely qualified to say whether C++ code "couldn't" be used in that context, but I think that there are enough potential points-of-contention to make me fairly certain why it isn't being done now.
And templates cause nothing but bloat and bug propagation, duplicating code with just a structure variant... thus, any bug in the template is now a bug everywhere that template is used... plus the problem that a template is not exactly usable everywhere... each use would have its own "unique" variation... and just makes things harder to debug.

Last edited by jpollard; 07-15-2014 at 10:32 PM.
 
Old 07-16-2014, 12:07 AM   #74
a4z
Senior Member
 
Registered: Feb 2009
Posts: 1,727

Rep: Reputation: 742
Quote:
Originally Posted by jpollard View Post
And templates cause nothing but bloat and bug propagation, duplicating code with just a structure variant... thus, any bug in the template is now a bug everywhere that template is used... plus the problem that a template is not exactly usable everywhere... each use would have its own "unique" variation... and just makes things harder to debug.
Could you stick to arguments that are facts?

Templates are much safer and perform better than anything based on void*. If you do not believe me, go and measure linked lists, for example, in C and C++ for various types.

And "a bug in library code == a bug everywhere" is language agnostic.

I understand that people who had a bad weekend experience with C++ do not like the language, and that most people do not even get the basics of RAII and the destructor guarantee and why it leads to more secure code. And this has nothing to do with templates or OOP and so on; it's nearly plain C plus.

But that is no reason to tell nonsense.
 
Old 07-16-2014, 04:29 AM   #75
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
I guess you haven't worked in embedded.

There is absolutely no support available for it.
 
  

