The old chestnut of dependency hell: is there no end in sight?
I have been trying to update k3b from the Mandrake cooker FTP site using urpmi.
I am completely fed up with dependency problems. If the packagers know that I need package xyz to upgrade package abc, why don't they just include it inside package abc? urpmi is hopelessly lacking at this task. Even when it tells me that other packages are needed and I agree to their installation, it then fails, telling me that they conflict with previously installed packages of the same name but an earlier release number.
Is there no such thing as backward compatibility?
Can anyone tell me how to get around this problem, or do I just have to put up with it until Linux becomes a grown-up OS?
Sorry to rant on, but this is really starting to p*** me off. I have stayed away from MS for three years now, and have happily learnt how to compile software from source, tinker with .conf files, and so on. But I'm not getting any younger, and could do with a slightly easier life when I want to upgrade a piece of software. Am I the only one who sees this as a continuing problem? I would dearly like to see the demise of the monopoly known as Micro$oft, but I fear that while these problems remain, Linux will continue to be for the die-hards amongst us, and I for one am ready to throw in the towel.
There are a number of problems here:
1) There is no mechanism to prevent a developer or packager from requiring libdfjhg = 1.2.3.4.
2) Once that exact lib version is required by one package, every other package that requires something earlier or later will be in conflict. About the only workaround is duplicate libs under different names...
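The conflict in 2) ultimately comes down to how version strings are compared. A quick sketch (assuming GNU coreutils `sort`; `libfoo` is just a stand-in name) shows why naive string comparison is not enough:

```shell
# Lexical sort gets version strings wrong: comparing character by character,
# '1' < '3', so 1.2.10 sorts before 1.2.3.
printf '1.2.10\n1.2.3\n1.2.4\n' | sort
# prints: 1.2.10  1.2.3  1.2.4

# GNU sort's -V flag compares version fields numerically -- the kind of
# ordering a package manager needs before it can decide whether an installed
# libfoo satisfies a requirement like libfoo >= 1.2.3.
printf '1.2.10\n1.2.3\n1.2.4\n' | sort -V
# prints: 1.2.3  1.2.4  1.2.10
```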
One solution is that developers and packagers must have the discipline, enforced through some standard, to require only libdfjhg >= 1.2.3.4. This is backward compatibility, and it could work, except that from time to time there are major redesigns of a package. It is a pain to keep versions within versions to maintain backward compatibility after a major rewrite.
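In RPM terms the difference is one character in the spec file. A hypothetical fragment (`libdfjhg` is the made-up library from the list above, not a real package):

```spec
# Brittle: only exactly this build of the library ever satisfies the
# dependency, so any other package pinning a different version conflicts.
Requires: libdfjhg = 1.2.3.4

# Friendlier: any later, backward-compatible release also satisfies it --
# but only if the library's developers actually honour that compatibility.
Requires: libdfjhg >= 1.2.3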
Another solution is that packages that require many libraries and links can be statically linked so that installation does not have as many requirements. Opera has that choice because they want more portability. Unfortunately, if every package did that, the size of /usr would balloon to overflow even today's large hard drives.
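The static-versus-dynamic trade-off can be sketched in a hypothetical Makefile fragment (`player` and `libfoo` are invented names for illustration):

```make
# Dynamic: small binary, but it breaks unless a compatible shared libfoo
# is installed on the target system.
player-dynamic: player.o
	$(CC) -o $@ player.o -lfoo

# Static: the library's code is copied into the binary, so there is no
# runtime dependency -- at the cost of a much larger file, duplicated
# for every package that does the same (the Opera approach).
player-static: player.o
	$(CC) -static -o $@ player.o -lfoo
```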
Another solution, which may be the only one guaranteed to work now, is to keep everything in source form and rebuild it all whenever you wish to install anything. That is what the distributors do. This is approaching feasibility with the newest powerful machines and huge, fast drives and memory, but it is not practical for the ordinary user and his machine.
In the long run, something else must be done. I think there are just too many libraries and too much duplication. A particular developer can build and test his package, but there is no way to ensure that can be done on every Linux system. If the object of the game is to make Linux universal, some choices must be made. People who want to share libraries must agree to coordinate so that compatibility is assured. Either we need standardization on a finite number of libraries or we need to agree that rewrites will be coordinated. The diversity of the Linux community means that the maintainer of some light package and the committee in charge of the monster packages will have to agree to release code annually, or semiannually or whatever. This would mean that some would have to wait around for a release and others would have to work to a deadline, just like in the real world where some freedom is exchanged for efficiency. The developers should work from each other's snapshots, not the installing public.
Does this mean we need an umbrella organization to define a compatibility layer for Linux beyond the kernel? Yes. Instead of each distro struggling to patch stuff together a few times a year, with much wasted effort through duplication, we could have one committee of distributors setting the table of work and the distributors synchronize to them. The idea of sharing which is vital to Linux is based on not duplicating effort. Now developers, distributors and installers are all struggling with this mess, the ultimate waste. We need to share in a new way so that the end-user will be able to quickly install any current software on any current system. This change will require discipline but will save a great deal of energy at all levels.
What you advocate will only happen if it is developed using the standard community development model - otherwise the distro world might be divided into two camps: one that subscribes to this overarching compatibility system and another that does not.
But your basic theory is sound, IMO - you should package that up and post it on an LSB-related forum to get their feedback.
Packages shall have a dependency that indicates which LSB modules are required. LSB module descriptions are dash separated tuples containing the name 'lsb', the module name, and the architecture name. The following dependencies may be used.
lsb-core-arch
This dependency is used to indicate that the application is dependent on features contained in the LSB-Core specification.
lsb-graphics-arch
This dependency is used to indicate that the application is dependent on features contained in the LSB-Graphics module.
Packages shall not depend on other system-provided dependencies. They shall not depend on non-system-provided dependencies unless those dependencies are fulfilled by packages which are part of the same application. A package may only provide a virtual package name which is registered to that application.
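Putting the quoted rules together, an LSB-conforming package's dependency list would be very short. A sketch (`myapp` and its companion package are invented names; `ia32` is the LSB architecture name for x86):

```spec
# Everything the application needs from the system is expressed through
# the LSB module dependencies, nothing else.
Requires: lsb-core-ia32
Requires: lsb-graphics-ia32

# Any non-system dependency must be fulfilled by a package that ships
# as part of the same application.
Requires: myapp-libs = 2.0
```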
I think it means there will eventually be no more dependency hell. Unfortunately, I do not see x86 covered yet. They have two paths for developers to confirm compliance: a certification path suitable for the big guys, involving money, and one for the little guys, who submit results from a verification programme that inspects the packages. It looks good, but I would bet it is still going to take some time. The thing that will get it adopted is that everyone will benefit: the little guys' stuff will install on all compliant distros, the big guys will gain market share by advertising certification, which will reassure customers, and ordinary folks should find installation more like point-and-click...