LEPIX™ 0.9b (Beta)
An important departure. The dawn of a new era.
LEPIX™ is a new operating system based 'loosely' on Unix that attempts to address user complaints that have surfaced in the past few years.
The Unix time sharing system was created by Ken Thompson and Dennis Ritchie of Bell Laboratories in Murray Hill, New Jersey. It introduced some very good concepts but unfortunately is geared more towards the computer adept (or scientist) than towards the casual user. Several branches of Unix have appeared over the years and most conform to the original model of Thompson and Ritchie. This is where LEPIX™ takes over.
Amongst the more critical user complaints about Unix are the following.
Losing files. Users are generally intimidated by a hierarchical file system and often forget where they place things. Current tools for locating lost files are bulky and slow and rely on continual 'disk crunching' to keep track of file movements. Users can download or move or save a file and then completely forget where they put it - and current search technologies don't always succeed in finding it for them.
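The search problem this complaint describes reduces to a recursive walk of the hierarchy. A minimal sketch in Python - the function name find_by_name is purely illustrative, not part of any LEPIX™ interface:

```python
import os

def find_by_name(root, name):
    """Walk the directory tree below `root` and return every path
    whose final component matches `name` exactly."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        for entry in filenames + dirnames:
            if entry == name:
                matches.append(os.path.join(dirpath, entry))
    return matches
```

Note this is exactly the kind of exhaustive 'disk crunching' the complaint objects to: the cost grows with the size of the tree, which is why indexing schemes exist at all.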
Read-only directories. Users find it hard to grasp that Unix directories are just files like any other and can be write-protected like any other files. Users do not understand what is stored in a directory and why certain file operations succeed whilst others fail. Users assume that if a file is not read-only then it must be possible to rename it or remove it. Unfortunately this is not the case: renaming or removing a file changes the directory that lists it, and so requires write permission on the directory, not on the file.
Read-only files. Users are also perturbed by read-only files. They open these files for viewing; suddenly they realise there are things they need to change; but when attempting to save they are told they do not have 'sufficient permissions'. This scares users. They become unsettled and often panic. They do not immediately understand files can be protected from writing. Further: the system does not ordinarily offer users a way around such dilemmas.
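The 'locked' state at issue here is nothing more than the absence of write permission bits, and it can be detected before the user ever attempts a save. A minimal sketch, assuming a POSIX system - the helper is_locked is illustrative, not a real LEPIX™ call:

```python
import os, stat, tempfile

def is_locked(path):
    """Illustrative LEPIX™-style check: a file is 'locked' when
    none of its write permission bits are set."""
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o444)          # read-only: the classic 'scary' case
print(is_locked(path))         # True
os.chmod(path, 0o644)          # restore the owner's write bit
print(is_locked(path))         # False
```

An editor that performs this check up front can warn the user on opening the file rather than ambushing them at save time with 'insufficient permissions'.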
LEPIX™ attempts to remedy the above user complaints (and others).
Overwriting read-only files. As long as a parent directory is not also write-protected the system can - without further authentication - remove the original read-only file and replace it with the user's new copy. The user is of course alerted to what's going on. But instead of scaring the user with cryptic terminology such as 'read-only' and 'write-protected' the generic term 'locked' is used. If a file is read-only and resides in a directory that is not read-only the system can alert the user that the file is 'locked' and give the user an option to 'overwrite'. If the user responds in the affirmative the original file is destroyed and a new file with the same name is stored in its place.
Note the above does not work if the directory itself is read-only. In such a case no further files can be added to the directory, no files may be removed from it, and no files already in it may be renamed - because each of these operations requires permission to write to the directory, a permission the user does not have.
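The 'overwrite' mechanism hinges on a genuine POSIX property: removing a file requires write permission on the directory, not on the file itself. A minimal sketch of the replace step, assuming a writable parent directory - the function name overwrite_locked is illustrative:

```python
import os, tempfile

def overwrite_locked(path, new_contents):
    """Replace a read-only ('locked') file by unlinking it and
    writing a fresh file under the same name. Works whenever the
    parent directory is writable, whatever the file's own bits."""
    os.unlink(path)                    # allowed: we may write the directory
    with open(path, "w") as f:         # a brand-new file, owned by us
        f.write(new_contents)

d = tempfile.mkdtemp()
path = os.path.join(d, "notes.txt")
with open(path, "w") as f:
    f.write("original")
os.chmod(path, 0o444)                  # lock the file
overwrite_locked(path, "replacement")  # succeeds: the directory is writable
print(open(path).read())               # replacement
```

Note the replacement is a new file: its inode, owner, and permission bits are those of the fresh copy, not of the original - which is precisely the behaviour the text describes.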
Of course it is possible in exceptional cases to also remove the write protection on the parent directory to let the file operation succeed, but this is not advisable in practice, as in so doing the system must commit to a possibly lengthy recursive calculation.
Tracking file movements. Unix is particularly troublesome as regards the relationship between physical file and access path. The Unix file system is so constructed that - in theory - any number of paths may resolve to the same physical file.
Note that this is not related to the so-called symbolic link: symbolic links are files unto themselves, albeit of a different 'type', and as such are interpreted by the system as containing paths to target files.
The confusion with multiple paths resolving to the same physical file does not involve the creation of multiple files - only the creation of multiple directory entries. This is often extremely confusing for the neophyte Unix student as well.
Unix files are identified only by a numerical index. This index is commonly called the 'inode' (strictly speaking the inode number, which locates the inode structure itself). Each directory entry contains only a few pieces of data.
- The numerical index known as the inode.
- The length of the current directory entry.
- The type of file represented by the entry. This type may be 'regular', 'directory', 'link', and so forth.
- The length of the name of the file.
- The name of the file.
The entry and name length fields are obviously used only for internal bookkeeping; aside from the type field, all a directory entry contains is a name and an 'inode'.
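This structure is visible from user space: Python's os.scandir exposes essentially the same fields the directory entry holds - the name, the inode number, and the type. A minimal sketch:

```python
import os, tempfile

d = tempfile.mkdtemp()
open(os.path.join(d, "example.txt"), "w").close()

# Each directory entry amounts to little more than a name plus an inode.
for entry in os.scandir(d):
    print(entry.name, entry.inode(), entry.is_dir())
    # The inode in the entry matches the one stat() reports for the file.
    assert entry.inode() == os.stat(entry.path).st_ino
```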
The system uses the inode to locate information about the file in the volume control block, sometimes called the 'ilist'. All entries in the ilist are of the same size, wherefore it is possible to simply multiply the inode number by the ilist entry ('iblock') size to arrive at the offset in the ilist where the file's information is stored.
Note that this location does not actually represent the contents of the file - only where the file contents are located. The entry also contains further information about file ownership, permissions, access and modification time stamps, number of links, and so forth; but these details are not germane to the discussion at hand.
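The offset arithmetic described above is plain multiplication. A sketch, with an assumed entry size of 128 bytes (real implementations add the base offset of the ilist on disk, a detail the text glosses over):

```python
# Illustrative only: the entry size varies by filesystem.
IBLOCK_SIZE = 128   # bytes per ilist entry (assumed for this sketch)

def ilist_offset(inode, iblock_size=IBLOCK_SIZE):
    """Byte offset of a file's entry within the ilist, per the
    scheme described in the text: offset = inode * entry size."""
    return inode * iblock_size

print(ilist_offset(7))   # 896
```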
[It should however be stressed that the information on the number of links for a given file is crucial to the system's survival and simultaneously points to yet another user complaint LEPIX™ attempts to address. Users expect files they've deleted to be 'gone' - and yet with the Unix file system this is only true if the file in question has but one link. If there are other paths to the same physical file in the system then 'deleting' what the user thinks is a file will not actually delete it - it will only 'unlink' the current path to the file. This feature of the Unix system, touted by computer scientists worldwide for its power, is a source of acute aggravation on the part of users of the system.]
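The link-count behaviour the aside describes is easy to observe: creating a second hard link raises the link count to 2, and unlinking one path leaves the data perfectly reachable through the other. A minimal sketch, assuming a filesystem that supports hard links:

```python
import os, tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "report.txt")
b = os.path.join(d, "copy-of-report.txt")

with open(a, "w") as f:
    f.write("important data")

os.link(a, b)                                   # second path, same inode
print(os.stat(a).st_nlink)                      # 2
print(os.stat(a).st_ino == os.stat(b).st_ino)   # True: one physical file

os.unlink(a)                                    # the user 'deletes' the file...
print(open(b).read())                           # important data: it is not gone
```

The data blocks are only reclaimed once the link count reaches zero (and no process still holds the file open) - which is exactly why the user's 'deleted' file refuses to be gone.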
Because the system allows coupling multiple paths to the same physical file it is not possible to accurately track file movements. A user might save a file first in one directory, then move it through another program to another directory, then feel lost when returning to the first program to find the file gone. If it were possible to track this file through its inode then the system could always show the user how the file has been moved and maintain access to it.
But as the inode can lead to multiple paths the system cannot know which path is the path most recently accessed by the user.
The LEPIX™ solution is twofold.
- Disallow multiple links wherever possible and keep knowledge of this capability hidden from the user as much as possible.
- Permanently lock files which acquire multiple links. Create instead a hidden repository for these files. Files in this repository will have inodes different from those used by files on the outside. An attempt to access a multi-linked file actually goes through the original inode; but this first inode yields nothing as it is treated as a 'special case'. Information in the file's iblock will instead lead to a new inode in the hidden repository. In this way the system can keep track of file movements. Unfortunately the functionality of multi-linked files is largely lost but as users generally do not understand the concept anyway the loss is deemed both negligible and acceptable.
File permissions. Users are generally intimidated by what is perceived as an extremely complex Unix permissions system. Repeated attempts to show users how eminently simple (and elegant) the Unix system is consistently fail. LEPIX™ addresses this issue by largely making file permissions invisible.
In particular the 'eXecute' bit is completely hidden in the user interface. There are of course pros and cons with such a design decision.
The obvious argument against is of course that users will be forced to operate through their system console to change permissions, requiring them to obtain further skills LEPIX™ is designed to eliminate the need for; it is also pointed out that users, once returned to their graphical environment, will not be able to see the results of their console operations as the eXecute bit is kept hidden.
The obvious argument in favour of such a proposal - the one adopted by the LEPIX™ Foundation - is that users shouldn't be going around creating program files in the first place. Objections to this include citing the inability of users to create so-called 'shell scripts' of their own, as these scripts need the eXecute bit to run; the LEPIX™ Foundation have therefore decided a series of graphical tools will be created to fill this gap. In all other regards the LEPIX™ Foundation regard the existence of 'eXecutable' files as the domain of the OS vendor and/or third party software suppliers and not something users should have control over anyway.
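For reference, the console operation users would be falling back on amounts to toggling a single mode bit - a shell `chmod +x` or, sketched here in Python:

```python
import os, stat, tempfile

fd, script = tempfile.mkstemp(suffix=".sh")
os.write(fd, b"#!/bin/sh\necho hello\n")
os.close(fd)

mode = os.stat(script).st_mode
print(bool(mode & stat.S_IXUSR))       # False: not yet executable

os.chmod(script, mode | stat.S_IXUSR)  # the console-only step LEPIX™ hides
print(bool(os.stat(script).st_mode & stat.S_IXUSR))  # True
```

With the bit hidden in the graphical environment, this before/after difference is exactly what the user would be unable to see.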