This document contains only my personal opinions and calls of judgement, and where any comment is made as to the quality of anybody's work, the comment is an opinion, in my judgement.
In this useful interview with the Debian project leader (Stefano Zacchiroli) he mentions that he has managed to push for the Debian package manager to be fixed so that it can handle packages for multiple architectures; the lack of that ability has been a major limitation of DPKG with respect to the much better RPM.
As far as I can see Debian will keep the other major limitation of DPKG, which is the inability to have two packages with the same name but different versions installed at the same time, something that is very useful for example for libraries.
It is notable that this project started in 2004 and that after a few years of discussion and implementation it is coming to completion in 2011 (admittedly in part because it is not just the package tools that need upgrading, but very many packages).
This will be accompanied by somewhat sensible changes to the filesystem hierarchy to accommodate separate filesystem trees for multiple architectures, changes that resemble a return to the ancient practice from before the IA32 monoculture happened, and after the Linux FHS introduced a very partial and very ill-thought-out workaround.
In 2009 I wanted a bigger, better LCD monitor, after using two 17" LCD monitors for a while, which was a good arrangement but a distracting one because they had very different colour gamuts and colour temperatures, which depended strongly on viewing angle, and in opposite ways for the two monitors; both monitors had such narrow viewing angles that moving the head a bit would shift colours (as with most laptop displays), and the cheaper one was so narrow that the top and bottom of the screen had different colour temperatures even without moving the head. Both monitors also had an 18-bit colour palette, which is a bit limiting, and some backlight bleeding and non-uniformity.
In order to have good colour fidelity and viewing angle independence an LCD monitor should use IPS or PVA/MVA cells, which are more expensive than the usual TN cells. I also wanted a large display to compensate for using one display instead of two, and a tall one because I mostly edit text with it rather than watch movies, and a monitor with a good stand and the ability to remove the stand to mount the display on a clampable arm like this (of which I had two for the two monitors).
Therefore I determined that I wanted a 24" 1920×1200 IPS or PVA/MVA business-oriented monitor rather than a 22-23" 1920×1080 consumer-oriented monitor, as the latter are aimed more at widescreen movies and have inflexible and often non-removable stands.
The Philips 240PW9ES has the same IPS panel (manufactured by LG-Philips) as the HP 2475w, and it cost less than the others simply because it has fewer features, which I did not much care about, so I bought it and I still have it. The good points are:
There are some questionable or negative points for me:
Overall I am very impressed with the 240PW9ES, and I think it is amazingly good, even having used several other monitors in the same class. For me the tradeoff of fewer features for a lower price has been a success, as I would not use those features anyhow, and the main qualities (image quality, build quality, stand quality, price) are very good.
That is not surprising for a Philips product, as Philips is one of the few brands that seem to indicate consistently good engineering and good value, probably as the result of a corporate culture that focuses on substance more than appearance (the 240PW9ES, like a lot of other Philips products, looks very boring indeed, and I am happy with that).
Then there are the general advantages of its class: the height of 1200 pixels does help, a 24" screen is comfortably large (I can use it at a distance of 60-90cm), and IPS does deliver a much better and more convenient display experience.
Lesser monitors may look fine, but there is a definite improvement with monitors of the quality of the 240PW9ES, and I am still sometimes amazed, looking at my monitor, at just how good it is.
Depending on workload it happens fairly often that Linux-based systems apparently freeze for noticeable, even long, periods of time.
There must be some kind of resource overcommitment leading to long queues and thus high latency, and this must happen to a widely shared resource.
One surprising report shows that in some cases this is because of huge page allocations under memory pressure:
Once upon a time, one just had to assume that, once the system had been running for a while, large chunks of physically-contiguous memory would simply not exist. Virtual memory management tends to fragment such chunks quickly.
So it is a bad idea to assume that huge pages will just be sitting there waiting for a good home; the kernel has to take explicit action to cause those pages to exist.
That action is compaction: moving pages around to defragment the free space and bring free huge pages into existence. Without compaction, features like transparent huge pages would simply not work in any useful way.
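That fragmentation is easy to observe; here is a minimal sketch (my own illustration, not from the quoted article) that summarizes /proc/buddyinfo, assuming the usual 4KiB base pages so that a 2MiB huge page is an order-9 block as on x86-64:

#!/usr/bin/env python3
# Minimal sketch: summarize free memory fragmentation from /proc/buddyinfo.
# Assumes 4KiB base pages; a 2MiB huge page is then an order-9 block (x86-64).

HUGE_ORDER = 9

with open("/proc/buddyinfo") as f:
    for line in f:
        fields = line.split()
        node = fields[1].rstrip(",")
        zone = fields[3]
        counts = [int(c) for c in fields[4:]]   # free blocks, one count per order
        free_pages = sum(c << order for order, c in enumerate(counts))
        huge_blocks = sum(counts[HUGE_ORDER:])  # blocks big enough for a huge page
        print("node %s zone %-8s: %8d free base pages, %4d blocks of order >= %d"
              % (node, zone, free_pages, huge_blocks, HUGE_ORDER))

On a long-running desktop it is common to see plenty of free base pages but few or no free blocks of order 9 or higher, which is exactly the situation in which compaction has to do its expensive work.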
My impression is that most of the time the widely shared resource is the disk, as the Linux disk elevator algorithms are far from optimal, and the disk usually becomes critical on a desktop system because of the accumulation of unwritten pages in memory that are then bulk-written to disk periodically.
The fix for that problem is to ensure that the amount of unwritten pages in memory is not proportional to memory size, which is the ludicrously silly default, but to the speed of the mass storage device, and I typically aim at no more than 0.5s to 1s worth of unwritten pages.
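As a minimal sketch of that kind of tuning (these are not my exact settings, and the write speed below is just an assumed figure to be replaced by a measured one), on kernels that support the vm.dirty_bytes and vm.dirty_background_bytes sysctls the thresholds can be set as absolute byte counts:

#!/usr/bin/env python3
# Minimal sketch: cap unwritten ("dirty") pages at roughly 0.5s-1s worth of
# writing for the mass storage device, instead of a percentage of memory.
# Requires root, and a kernel with vm.dirty_bytes/vm.dirty_background_bytes.

WRITE_SPEED = 100 * 1024 * 1024      # assumed ~100MB/s; measure your own disk

def set_vm(name, value):
    with open("/proc/sys/vm/" + name, "w") as f:
        f.write("%d\n" % value)

set_vm("dirty_background_bytes", WRITE_SPEED // 2)  # start flushing at ~0.5s worth
set_vm("dirty_bytes", WRITE_SPEED)                  # throttle writers at ~1s worth

Setting the _bytes variants overrides the percentage-based dirty_ratio and dirty_background_ratio defaults.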
There are other causes of long latencies, usually misguided optimizations that batch operations, such as the laughable plugging mechanism in the Linux block I/O layer, and some can be fixed. For example Con Kolivas has been working for a while on a more responsive CPU scheduler, as the default one(s), because of an ancient UNIX tradition, will give long running periods to background tasks.
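As an aside, and not something the articles linked above propose, a partial workaround with the default scheduler is to demote long-running background tasks to the SCHED_BATCH or SCHED_IDLE policies, so that they are not given generous timeslices at the expense of interactive tasks; a minimal sketch:

#!/usr/bin/env python3
# Minimal sketch: move an already running background task to SCHED_IDLE so the
# default scheduler stops giving it long running periods. Pass its PID as the
# only argument; changing another user's process needs CAP_SYS_NICE.
import os
import sys

pid = int(sys.argv[1])
os.sched_setscheduler(pid, os.SCHED_IDLE, os.sched_param(0))
print("PID %d now runs under SCHED_IDLE" % pid)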
The huge page story is not surprising in this context. As the article says, because of huge pages the Linux memory manager has to maintain pools of different-sized blocks, but does not, which leads to the case where there are many ordinary pages free but they are not contiguous, so a huge page cannot be allocated. This triggers a moving around of pages to reduce free page list fragmentation, and this moving around can involve pages waiting to be written out and thus long delays. This description is objectionable for several reasons:
Indeed I have had very few problems with the system seemingly freezing since I changed the parameters of the Linux page cache flushers to flush unwritten pages far more often than the default, and I have written a Linux kernel patch to allow expressing the maximum number of unwritten pages directly instead of as a percentage of memory.