Bad design



hil
24th November 2004, 08:08 AM
After a small discussion about how Linux handles shared libraries and configuration files, I have something to say here.
1. Applications have to probe whether a library is available on the system and verify that it meets the version requirement during the ./configure build step. Writing and deploying programs this way is inefficient and time-consuming.
Isn't it the Linux system's job to tell programs what libraries to use and whether they are available? There should be an installer program that communicates with the system to query available libraries for dependency issues, and other programs for requirement issues.

For example, suppose a program depends on libgtk version > 2.0 and requires the g++ compiler. The programmer writes the dependencies and requirements in an XML file for the installer to read, instead of using autoconf or automake. The installer collects enough information and asks the Linux system whether the dependencies and requirements are met in a smarter way, instead of attempting to compile a small main() program that links against the libgtk functions, or putting CC=gcc in a Makefile. There should be a common API between the installer and the system for asking whether a system condition is met. For example, a call to the function
int system.libs.get_status( to_lib("libgtk", "i686"), 2.0 )
returns all the information the installer needs to resolve dependency issues. Likewise, a call to the function
int system.rpm.query( to_rpm("gcc-cpp", "i686"), 2.96 )
does something similar.
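In code, the installer I have in mind might look roughly like the sketch below. To be clear about what is made up: the deps.xml manifest format here is hypothetical, and since no common system API exists today, the sketch falls back on shelling out to the real rpm command line.

import subprocess
import xml.etree.ElementTree as ET

# Hypothetical manifest the programmer would ship instead of configure
# scripts; this format is invented for illustration.
MANIFEST = """
<package name="myapp">
  <requires name="gtk2"    min-version="2.0"/>
  <requires name="gcc-c++" min-version="2.96"/>
</package>
"""

def installed_version(pkg):
    """Return the installed version of pkg, or None if it is absent."""
    result = subprocess.run(
        ["rpm", "-q", "--queryformat", "%{VERSION}", pkg],
        capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else None

def check(manifest):
    """Report each requirement as FOUND or MISSING; return overall status."""
    ok = True
    for req in ET.fromstring(manifest).iter("requires"):
        name, want = req.get("name"), req.get("min-version")
        have = installed_version(name)
        if have is None:
            print("MISSING  %s (need >= %s)" % (name, want))
            ok = False
        else:
            # A real installer would compare versions with rpm's own
            # comparison rules; that part is deliberately left out here.
            print("FOUND    %s %s (need >= %s)" % (name, have, want))
    return ok

if __name__ == "__main__":
    check(MANIFEST)

The subprocess call is exactly where a call to the common API would go.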

2. /etc has long been a pool of configuration files. No one can guarantee that a configuration file such as /etc/lilo.conf, /etc/sysconfig/networking-scripts/ifcfg-eth0, or /etc/init.d/rc.5/* is edited correctly. Possible errors include referring to hdb3 when there is only hda, or writing IPADDR=192.168.1/9 where 192.168.1.9 was meant. The Linux system should describe what the system configuration files look like and ensure they are ready to be read, instead of leaving them as plain text files that impose validation overhead every time a program reads them. I guess some configuration files are in XML format, such as those of apache and proftpd. That reduces time spent rewriting parsers, which is great, and errors can be detected while the XML parser loads the configuration values into the program. But what I want is no errors or inconsistencies in any of the configuration files. Wouldn't it be better if applications left this job to the Linux system to do in a general, system-wide way?
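As a rough illustration of the kind of check I mean, here is a minimal sketch that validates a single field (IPADDR) of an ifcfg-style file. A real system-wide validator would need a schema for every parameter in every file under /etc.

import re

IPV4 = re.compile(r"^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$")

def validate_ifcfg(text):
    """Return a list of error messages for one ifcfg file's contents."""
    errors = []
    for line in text.splitlines():
        if line.startswith("IPADDR="):
            value = line.split("=", 1)[1]
            match = IPV4.match(value)
            if match is None or any(int(g) > 255 for g in match.groups()):
                errors.append("bad IPADDR: %r" % value)
    return errors

print(validate_ifcfg("DEVICE=eth0\nIPADDR=192.168.1/9"))  # flags the typo
print(validate_ifcfg("DEVICE=eth0\nIPADDR=192.168.1.9"))  # []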

tashirosgt
24th November 2004, 04:29 PM
Computer configuration is in a state similar to that of programming before the invention of object-oriented programming, or of operating systems before the invention of the BIOS. You have a situation where anything can affect anything: one startup script can call another script, which reads a file somewhere and runs yet another script somewhere else. It's very much like programming in a language where all variables and functions are global. (As an example, do "echo $PATH" to see your path and look at how many entries are in it that aren't mentioned in .bash_profile.) Given this, I think there will always be stubborn problems (at least for those who have to configure many different hardware arrangements) that can only be solved by hand-editing files. If my video card is misconfigured, I can't use a GUI to configure my machine. Am I supposed to edit XML files with a text editor? I think people can do this, but the syntax of most text configuration files seems simpler, and simpler is an important consideration given how often one ends up trying thirty different tweaks to get a system working right. My opinion is that the format of the configuration files is not the basic problem; the basic problem is that the whole configuration process is obsolete. It needs to be redesigned to be more compartmentalized, so one can fool with one part and have confidence that nothing else is being screwed up.
"Any script can run any other script considered harmful". Of course that requires more standards to be set. Instead of a "virtual machine" in the sense of a CPU, you need a virtual machine in the sense of an entire desktop computer.

Jman
25th November 2004, 07:22 AM
You just described rpm in your description of something to query libraries and dependencies. RPM is Red Hat's way of dealing with the dependency problem: it's a package database. You even mention it in your proposed API.

What is the "Linux system"? The kernel? That's for basic device management and drivers, not general purpose dependencies, in my opinion. The alternative is a new package format when there already are several: rpm, deb, and tar.gz.

If people hand-edit config files, eventually they will make an error. Some tools, like the display configuration tool, make editing the file by hand almost unnecessary. (At least that is the goal.) Right now config files are in a variety of text formats. The alternative is a database, like the Windows registry. (Which I don't like, by the way.)
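For example, the rpm database can already be queried from a script. The options below are real rpm query options; gtk2 is just a stand-in package name.

import subprocess

def rpm(*args):
    """Run one rpm query and return its stdout."""
    return subprocess.run(
        ["rpm"] + list(args), capture_output=True, text=True).stdout

print(rpm("-q", "gtk2"))                    # installed version, if any
print(rpm("-qR", "gtk2"))                   # what gtk2 itself requires
print(rpm("-q", "--whatrequires", "gtk2"))  # installed packages that need gtk2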

crackers
25th November 2004, 07:49 AM
My opinion is that the format of the configuration files is not the basic problem; the basic problem is that the whole configuration process is obsolete. It needs to be redesigned to be more compartmentalized, so one can fool with one part and have confidence that nothing else is being screwed up.
The Unix-like systems have actually compartmentalized the configurations - which is why there are so many of them. The "problem" you're seeing is the inter-dependency of the binaries upon one another: e.g., Gnome depends upon X. That's simplistic, but it shows the point - if you screw up your X config, Gnome ain't gonna work.

This is why "unified" distributions (Debian, Slack, FC/RH, etc.) have become the default medium to install and manage the systems. And that's not likely to change - the main difference between a FC system and a Windows system is the fact that the FC system is comprised of hundreds of programs and their attendant configurations from countless numbers of people working on disparate systems vs. a single, monolithic system.