*BSD
This forum is for the discussion of all BSD variants: FreeBSD, OpenBSD, NetBSD, etc.
Linux is my main OS, but I like the BSDs as well. I've always been a fan of the Unix philosophy.
I once got upset because I wanted FreeBSD to match Linux in hardware and application support. I was even more upset when I tried SunOS, and it did less than FreeBSD.
The strengths of the BSDs and Oracle's SunOS lie more in the server and enterprise space than on the desktop, which is why I use NetBSD as a storage server to back up my files. Even though Linux can do server work as well, I prefer to use Linux as a desktop. That's just me, though...
Distribution: LFS 9.0 Custom, Merged Usr, Linux 4.19.x
Posts: 616
Rep:
Quote:
Originally Posted by patrick295767
Kernel is also different.
Different? Yes... An actual microkernel? Nope. Both are modular & monolithic.
Widespread use of microkernels is a pipe dream. At issue is pragmatism vs. theoretical superiority. Modern micro-architectures are designed with monolithic/hybrid kernels in mind. In order for microkernels to really shine, they need hardware support.
There are other problems too, like how developers currently write software. They've only recently become accustomed to writing parallel-friendly code. And cooperative scheduling? (Some have suggested it.) That's just plain funny. I'd bet every dollar I have you'd see app developers parking on no-ops to make sure their application tops the benchmarks.
Anywhoo, I read a CSEE PhD thesis about microkernels a decade or so ago. The author focused on the non-science reasons they're impractical in the real world (costs, hardware support, etc.).
This thread is a mine of misinformation and not really worth resurrecting... If I wanted to nitpick: Dragonfly BSD uses a hybrid kernel. But on the whole monolithic kernels are the thing when it comes to the three main *BSDs.
Quote:
Originally Posted by cynwulf
This thread is a mine of misinformation and not really worth resurrecting... If I wanted to nitpick: Dragonfly BSD uses a hybrid kernel. But on the whole monolithic kernels are the thing when it comes to the three main *BSDs.
I'm still waiting for something new, OS wise, based on 30 years of published CS papers showing better ways to do things. On this front, the type of kernel is the least important. I read a CS paper a couple of weeks ago that showcased better ways of static linking that render shared libraries largely pointless. My first thoughts were: Imagine, no ld vulnerabilities, no versioning/dll-hell, only needing consistency in the kernel ABI.
The purpose of shared libraries is to reduce system resource usage. Code that is identical between processes is gathered into a shared library. As a side benefit, it also allows updating the library and not every individual program that uses that library.
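To put rough numbers on that resource argument, here's a back-of-envelope sketch in Python. The sizes are made up purely for illustration, not measurements from any real system:

```python
# Back-of-envelope: memory cost of static vs. shared linking.
# All three numbers below are hypothetical, chosen only to illustrate.
LIB_SIZE = 2_000_000   # bytes of library code every program needs
APP_SIZE = 500_000     # bytes of app-specific code per program
N_PROCS = 50           # programs using the library, running concurrently

# Static linking: every binary carries (and maps) its own copy of the library.
static_total = N_PROCS * (APP_SIZE + LIB_SIZE)

# Shared linking: one copy of the library's code pages, mapped by everyone.
shared_total = N_PROCS * APP_SIZE + LIB_SIZE

print(static_total, shared_total)  # 125000000 27000000
```

With these (invented) numbers the shared case needs less than a quarter of the memory, which is the whole point jpollard is making.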
Quote:
Originally Posted by jpollard
Not better, just different.
The purpose of shared libraries is to reduce system resource usage. Code that is identical between processes is gathered into a shared library. As a side benefit, it also allows updating the library and not every individual program that uses that library.
The paper addressed those very issues and, indeed, proposed a better way. I know what shared objects (.so files) are, how they work, and the benefits they provide. That is why I found the paper pretty interesting.
What better way? Got a link?
The problem remains that multiple versions of a library may be required - for different applications.
Quote:
Originally Posted by jpollard
What better way? Got a link?
The problem remains that multiple versions of a library may be required - for different applications.
Doesn't static compilation inherently fix library version problems? That's one of the problems static linking isn't supposed to have. Here's one of the articles... I've seen others since, but can't find them at the moment. (The PDF under "technical report"...)
The key: "...performance decrease of at most 4% and a space increase of 40% relative to dynamic linking".
Neither of those numbers shows promise. A cost of 40% more memory is a problem.
A problem is that if a library must be updated... every application has to be rebuilt.
I don't fully agree with part of the conclusion:
"Slinky makes it feasible to replace complicated dynamic linking with simple static linking."
It isn't simple:
1. It requires the kernel to carry hash digests...
2. It requires rebuilding ALL applications if a library has to be patched.
3. It requires two additional utilities to do the linking, though these functions could potentially be embedded in the existing linker.
Another problem that may exist is that the code sharing is based on a sliding window... This is an issue because libraries are shared at page boundaries (along with whatever ASLR alterations). The paper doesn't address this for static linking, since you can't relocate with "simple static linking". Granted, the code pages are still position-independent, so maybe it could still be done, but it would take a lot more effort.
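A little Python sketch of why the page boundary matters. The byte strings below stand in for hypothetical executables, and the "library" is just pseudo-random filler; the VM system can only share memory in whole, aligned pages, so identical library bytes at different offsets share nothing:

```python
import hashlib
import random

PAGE = 4096

def page_hashes(data):
    # Hash fixed, page-aligned chunks: the granularity at which the
    # kernel can actually share physical memory between processes.
    return {hashlib.sha256(data[i:i + PAGE]).hexdigest()
            for i in range(0, len(data), PAGE)}

random.seed(0)
lib = random.randbytes(4 * PAGE)  # 16 KiB of "library" code (4 full pages)

# Three hypothetical executables embedding the same library bytes:
a = b"A" * 100 + lib    # library lands OFF page alignment
b = b"B" * PAGE + lib   # library lands ON a page boundary
c = b"C" * PAGE + lib   # same alignment as b

print(len(page_hashes(a) & page_hashes(b)))  # 0: misaligned, nothing shareable
print(len(page_hashes(b) & page_hashes(c)))  # 4: aligned, all 4 lib pages shareable
```

Same code, zero sharing in the misaligned case, which is exactly the problem with a sliding-window scheme that ignores page boundaries.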
Another problem, which isn't addressed, is licensing. Many shared libraries on Linux are LGPL, which requires the library to remain separate from the executable; this is what allows the executable image itself to be proprietary... Using the method described violates those licenses. Statically linking GPL (or LGPL, for that matter) libraries requires the application to also be GPL (or LGPL, depending on the application).
Yes and no. It fixes the versioning issue... but doesn't fix the patching issue. With static linking, the application has to be relinked every time a patch to the library comes out, even though patches are SUPPOSED to leave the library's major/minor version alone (versions being major.minor.patch). With shared libraries, one with a higher patch level can completely replace one with a lower patch level.
Normally, when an application gets linked, it SHOULD be linked against only the major version. Then the minor version may change, and even the patch level can change, without relinking.
I see most of your points and understand where you're coming from. But, there are a couple of things in particular...
Quote:
Originally Posted by jpollard
A problem is that if a library must be updated... every application has to be rebuilt.
Depending on whether or not you're speaking specifically of the slink method...
Assuming classic static linking is the context: why? If there's a security risk, sure. Other than that, I see no reason something that's working needs to be modified just because there's an updated library function. And in both of those contexts, the original developer is far more qualified to update the executable than a sysadmin updating a system library is. Truuust me, I've deployed enough JVM updates that broke stuff because the developers signed off without actually testing the new libs with their applications. If those same apps were written in Go, I don't imagine the issues would have been as alarming.
On the slink side: I believe they mentioned an update would generate a new hash. But, I haven't really played with it. It's just one of many newer static linking ideas I've seen over the years.
Quote:
Originally Posted by jpollard
Another problem is licensing, and isn't addressed. Shared libraries on Linux are identified as LGPL, and require the library to be separate from the executable. This allows the executable image to be proprietary... Using the method described violates the licenses. Linking GPL (or LGPL for that matter) libraries requires the application to also be GPL (or LGPL depending on the application).
The GPL was designed by ideological individuals who are, at times, rather extreme. Those ideals bleed into their software design, and nightmares like the LGPL are born. IMHO, either make it BSD, or make it GPL, and stop wasting people's time with half measures.
Personally, I like the idea of free software for all, but it doesn't pay the bills. The solution is some free, some paid. I'm all for basic tools like programming tools and operating systems being free software, but things like Civilization or Skyrim wouldn't make it out unless people were able to support their families doing so.
Quote:
I see most of your points and understand where you're coming from. But, there are a couple of things in particular...
Depending on whether or not you're speaking specifically of the slink method...
Assuming classic static linking is the context: why? If there's a security risk, sure. Other than that, I see no reason something that's working needs to be modified just because there's an updated library function. And in both of those contexts, the original developer is far more qualified to update the executable than a sysadmin updating a system library is. Truuust me, I've deployed enough JVM updates that broke stuff because the developers signed off without actually testing the new libs with their applications. If those same apps were written in Go, I don't imagine the issues would have been as alarming.
It is either due to a security problem, or functional problem - there is no difference. Sometimes you cannot wait for a developer to do it.
And the "rebuilding" requirement is mentioned in the paper.
And JVM is not exactly something you are allowed to fix.
Quote:
On the slink side: I believe they mentioned an update would generate a new hash. But, I haven't really played with it. It's just one of many newer static linking ideas I've seen over the years.
Yes, it does generate a new hash... but that DOESN'T get it used in other applications without rebuilding.
Slink doesn't appear to take over the purpose of shared libraries. It still increases the size of the executable images, it slows down the kernel, and it doesn't fix the update problem. ALL of these issues are handled by shared libraries.
One very visible place where shared libraries work very well is interfacing to the system. A shared library provides the standardized presentation. Take the "fork" system call, for instance. Originally it really was a system call. It isn't now, and hasn't been for quite a while (it's now a function that invokes a "clone" system call taking a lot more parameters), but programs that were linked years ago still work. There was no need to rebuild all the programs.
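For the curious, that point is easy to poke at from any Unix language. A minimal Python fork/wait demo; the clone() detail stays hidden inside the C library, which is exactly the stability the shared libc wrapper buys you:

```python
import os

# Minimal fork/wait demo. The program only ever sees the classic fork()
# interface; on Linux/glibc the C library implements it via clone() with
# extra parameters, but old binaries (and this script) neither know nor care.
pid = os.fork()
if pid == 0:
    # Child: exit immediately with a status the parent can collect.
    os._exit(7)

_, status = os.waitpid(pid, 0)
print(os.waitstatus_to_exitcode(status))  # prints 7
```

Run `strace -f -e trace=clone,fork` on something similar and you'll see clone(), not fork(), hit the kernel.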
With slink, you lose portability. Executables with classic static linking had to be rebuilt too, but fortunately they were rather few.
Quote:
The GPL was designed by ideological individuals whom are, at times, rather extreme. Both thoughts bleed into their software design and nightmares like the LGPL are born. IMHO either make it BSD, or make it GPL and stop wasting people's time with half measures.
The GPL and LGPL were created to prevent code from being stolen. The LGPL prevents the libraries from being stolen, but does allow their use by proprietary software.
Quote:
Personally, I like the idea of free software for all, but it doesn't pay the bills. The solution is some free, some paid. I'm all for basic tools like programming tools and operating systems being free software, but things like Civilization or Skyrim wouldn't make it out unless people were able to support their families doing so.
Actually, GPL is paying for a LOT of bills. Entire companies can now exist that could not exist before. It prevents the products produced by a group of companies (shared development) from being taken over by a monopoly.
BSD licenses do not - as shown by both Apple and Microsoft. This is possibly the reason all of the BSDs have remained relatively small in usage. Why should a company port BSD to another architecture if others can take the code without any sharing? BSD USED to be the most portable system. It USED to get the most development work done.
BTW, On the license issue - even with proprietary licenses you are not allowed to static link your program and take the result to another system. If you were, it would be MUCH easier to take a Windows application and run it on Linux. The only thing Wine would need to do is handle the system calls...
But doing so means you are also taking Microsoft code... and that violates the license you bought. All you got was the right to run on ONE system. For companies with a site license, only on THOSE systems, and only on Windows.
Quote:
Originally Posted by jpollard
The GPL and LGPL were created to prevent code from being stolen. The LGPL prevents the libraries from being stolen, but does allow their use by proprietary software.
You clearly have an opinion on this and others have theirs, but GPL code can still be owned and controlled by anyone if they have the funding to buy off or fund the developers. In fact GPL is a 'success' in that many big corporations now have developers working for free rather than having to pay out millions developing in house. They also have a lot of control and influence in those projects, so it's not as "free" as some might assume.
Quote:
Originally Posted by jpollard
BSD licenses do not - as shown by both Apple and Microsoft. This is possibly the reason all of the BSDs have remained relatively small in usage. Why should a company port BSD to another architecture if others can take the code without any sharing? BSD USED to be the most portable system. It USED to get the most development work done.
Yes, you could say that proprietary vendors take code and use it (as they are allowed to, and as BSD-style licensed projects allow). But unlike some other GPL-licensed software projects, one like FreeBSD, for example, does not have a board of directors full of reps from the likes of HP, IBM and Intel overseeing things and keeping tabs on how their money is being spent... The *BSDs have their roots in academia and, if you look into it, not much has changed in that respect.
Both schemes work for different purposes, each in its own way; neither is perfect. It's all about balance.