Software and hardware annotations q1 2006
This document contains only my personal opinions and calls of
judgement, and where any comment is made as to the quality of
anybody's work, the comment is an opinion, in my judgement.
- 060327
- Just discovered an aspect of
Fedora 5 that to me seems
rather dumb (to say the least): the startup script for
the X font server rewrites the
fonts.dir
files in all the fonts directories configured for
it. It is just a minor inconvenience that it does it
incorrectly, ignoring any PostScript Type 1
fonts. I surmise that this was done in a misguided and
ignorant attempt at imitation of that horror,
Fontconfig/XFt2,
but of course this is deeply wrong because one can put
into a fonts.dir
file stuff that cannot be
deduced automatically from scanning font files.
Even worse, the startup script for
xfs
rewrites the fonts.scale
file, which to me seems astonishing, because quite a
bit of that cannot be deduced from font files;
moreover the very logic of the existence of
fonts.scale
is based on that: the idea is
that fonts.dir
might be created if it
does not exist by running mkfontdir
and
then appending fonts.scale
to the
resulting fonts.dir
.
Anyhow the idea is that utilities like
mkfontdir
or ttmkfdir
are
just helpers to produce initial font list files that
can then be customized by hand, and I do, because they
produce somewhat incomplete or inappropriate font
configuration.
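For the record, the intended workflow is roughly the
following (a rough sketch only; the directory name is
just an example):
  cd /usr/share/fonts/local      # example directory, adjust to taste
  ttmkfdir > fonts.scale         # initial list of scalable (TrueType) fonts
  mkfontdir                      # scan font files and merge fonts.scale into fonts.dir
  $EDITOR fonts.dir fonts.scale  # then add by hand what cannot be deduced automatically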
Looking at the /etc/rc.d/init.d/xfs
script the code looks overwrought and overclever, in a
way that reminds me of Debian scripts like the
Debian update-conf series of
scripts. Too bad, as leaving that misguided cleverness
behind is one reason why
I switched to Fedora
after trying out Debian for quite a while.
- 060326
- Recently I have also switched from an
Athlon XP 2000+ (1.6GHz) to an Athlon 64
3000+ (2.0GHz). This has of course required changing
the motherboard, but thanks to a careful choice (same
chipsets) it has required no reinstallation or
extensive changes in MS Windows 2000 or in Fedora
5. The new motherboard ought to allow the Athlon to
support ECC.
As to speed the major differences are in memory
bandwidth and IO speed. I am still using the
Athlon 64 in 32 bit mode for now, but I
downloaded an AMD64 GNU/Linux Live CD to look around a
bit. Using hdparm -T /dev/hda
to get an idea of actual memory speeds, I got around
420MiB/s with the Athlon XP 2000+ with a VIA
KT266A chipset, 1,500MiB/s with the Athlon 64
3000+ in 32 bit mode, and 3,300MiB/s with the same in
64 bit mode. The memory sticks are the same, but
instead of running at 266MHz they now run at 400MHz
(they are 400MHz sticks, but of course they can be run
at lower speeds).
As to disk speed I normally run disk-disk backups.
With the Athlon XP 2000+ they used to run at
25-30MiB/s with around 50-60% CPU usage. Now they run
at 35-40MiB/s with 25-35% CPU usage, this on disks
capable of around 55-60MiB/s transfer rates. Note that
nothing other than CPU and motherboard has changed, in
particular the memory sticks are the same and even the
ATA host adapter is the same, because I use a PCI card
for that, not the motherboard provided one.
So I get several times faster memory speeds and
around 40% greater IO speed just by getting a faster
processor with a different motherboard, since the
memory sticks and the ATA host adapter are the same.
For the memory it is pretty obvious that the
memory controller embedded in the Athlon 64 CPUs
is much, much more efficient than the one in the north
bridge of my previous chipset, as even if they are now
clocked at 150% of the previous rate, they now seem to deliver
300% of the bandwidth in 32 bit mode and 670% in 64 bit
mode, to say nothing of the IO improvement.
Put another way, the kernel cannot manage
back-to-back reads or writes even with an
Athlon XP at 1.6GHz, and evidently not with an
Athlon 64 either. However, purely sequential
reading from a single disk, as in hdparm -t
/dev/hda3
, does obtain almost the maximum
theoretical bandwidth of 55-60MiB/s with around
15-20% CPU usage; curiously dd bs=16k
if=/dev/hda3 of=/dev/null
runs at
60-65MiB/s. Even more curiously, pure sequential
writes are faster, as dd bs=16k
if=/dev/zero of=/dev/hda3
runs at 65-70MiB/s.
So how come copying between two disks (and I
made sure they are on different ATA cables) is rather
slower than either reading or writing from a single
one? Well, I suspect that a large part of the story is that
the CPU overhead for managing the buffer cache
and for managing IO in the Linux kernel is rather high,
and with a faster CPU those costs are so much lower that the
interval between successive reads and successive
writes is much smaller, allowing the disc
subsystem to be utilized better.
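A crude way to reproduce the comparison is something
like the following (device names are made up, and the
writes destroy the target partitions, so only scratch
ones should be used):
  dd bs=16k if=/dev/hda3 of=/dev/null    # pure sequential read from one disc
  dd bs=16k if=/dev/zero of=/dev/hdc3    # pure sequential write to another disc
  dd bs=16k if=/dev/hda3 of=/dev/hdc3    # interleaved read and write across discs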
This would account both for why the same disks and
the same host adapter perform so much better on a faster
CPU, and why just reading or just writing perform so
much better than reading and writing interleaved.
My final comment is the usual one: obviously
kernel developers
have the money to enjoy top end PCs
and thus don't much notice those huge CPU overheads,
so it is not their itch; as to those who are
unfortunate enough to suffer that itch, well they are
outsiders, and kernel development seems to me to be
ever increasingly territorial, as owning
a chunk of the kernel often means getting and keeping
a well paid and cool job.
- 060325
- So I have upgraded to Fedora 5. I have been using
Fedora 4 now for
several months
and overall I am fairly pleased. The best news is
that there are regular updates of a Fedora release
until the next release and for some time after that.
Which means that it is fairly stable but not totally
frozen, and older versions are still updated for a
while after newer ones come out, making it possible to
update once instead of twice a year (but I still
update twice a year, as I like to track the latest
stuff).
Given that Fedora is a testbed for RedHat's
product line, updating from one version to another can be
fairly imposing, especially if packages from non
official repositories have been installed. This has
made my own update from Fedora 4 to 5 quite a bit more
involved than I had expected.
But then there are several non official
repositories for the less popular packages, and some
of them are pretty well maintained.
The major drawback of Fedora is that Red Hat
are evolving it towards things like udev
(which I have disabled) that are very unlike UNIX, and
anyhow seem to me poorly conceived and realized hacks.
But then most Linux developers do things like that,
because of the
Microsoft cultural hegemony...
- 060323
- Thanks to an email on the XFS mailing list I have
discovered this LKML article about
very delayed written block saving under Linux
which reports that without the included patch the
Linux page cache system, in some important cases,
delays saving modified pages by a long time.
Now I understand why I had to run a
while sleep 1; do sync; done
loop in parallel to my disc-to-disc backups, and why
they, involving large amounts of writing, had some
undesirable side effects; for example a sync
issued while they ran would take a long time. Well, with the
included patch that is mostly fixed. Good to know.
Curious that it has not yet made it into the kernel.
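For reference, the way I run the flushing loop alongside
a backup is more or less this (the backup command here is
just a made-up placeholder):
  while sleep 1; do sync; done &         # keep flushing dirty pages
  SYNCPID=$!
  rsync -aHx /home/ /backup/home/        # the actual disc-to-disc backup
  kill "$SYNCPID"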
- 060322
- Just read two recent interesting tests
(XBitLabs part 1,
XBitLabs part 2,
HardOCP)
of whether current PC games are more CPU or GPU
bound. The tests are mostly with rather
advanced games, on a rather fast GPU, and with rather
fast CPUs. The conclusion is that many current games
are CPU bound for Athlon 64 CPUs below 2.4GHz and
for Pentium 4 CPUs below 3.2GHz.
Of course with cheaper GPUs the GPU becomes the
bottleneck, but then one would play at less than
1600x1200 with
AA and
AF
both turned on.
On my poor Athlon XP 2000+ (1.6GHz, 256KiB
cache) most recent games I play are CPU bound, in
particular Doom 3, Quake 4,
the Battlefield 2 demo, and to some extent
F.E.A.R. too.
The amazing aspect of the tests however is that a
few games, for example
Quake 4
and
Serious Sam 2
seem to multithread pretty well, when the graphics
card is fast enough that they become CPU bound.
For these games the Athlon X2 of a given
rating delivers a higher frame rate than the
Athlon 64 of the same rating, even if the speed
of the two cores is slower; for example the
X2 3800+ with two 2.0GHz cores slightly outperforms
the 64 3800+ with a single 2.4GHz core.
This could also be due to the greater cache, as
the X2 has 2x512KiB and the 64 512KiB only, but the
CPU utilization graphs make it clear that both CPUs
get engaged. Still the advantage is not awesome, as
two 2.0GHz cores are roughly equivalent to a single
2.5GHz one, or an efficiency of around 60% overall, or
seen another way, the second CPU adds only 25% to
performance (though from the CPU graphs in one of the articles,
with a lot more effort).
Finally the tests are yet another demonstration of
just how large the price/performance advantage of
Athlon/Sempron CPUs over Pentium 4/Celeron D
ones is, especially in the middle and lower price ranges.
- 060320
- Videogames have indeed achieved some rather
important status, as the Financial Times
devotes a
full article
to the impending release of the
Godfather videogame,
noting that its delay reduced
EA's
market value by US$800m, or 5% of its total valuation.
- 060319c
- Just finally made available
sabifire
,
a fairly elaborate shell script to set up a good set
of
Netfilter
(a.k.a.
iptables
) rules suitable for an internet
leaf node, either as a standalone system or the
gateway for a single subnet. I have used it for a
couple of years on my own home PC and colocated web
server, together with
sabishape
.
Apart from being carefully designed (it demonstrates,
like sabishape
, what an elegant shell
script should look like), it has some unusual
features, like the ability to set up much the same
rules for IPv6 as for IPv4.
An outline of the design of the rules and of the
script itself is contained in
my draft Linux iptables
.
One of the interesting aspects is just how non trivial
the script is. In part this is because it is fairly
robustly engineered, but in part because the subject
area is intrinsically subtle, complex and difficult.
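Just to give a flavour of what leaf-node style rules look
like, here is a minimal sketch (only an illustration, not
the actual sabifire rule set):
  iptables -P INPUT   DROP               # default deny for incoming packets
  iptables -P FORWARD DROP               # nothing is routed through by default
  iptables -P OUTPUT  ACCEPT             # outgoing traffic is allowed
  iptables -A INPUT -i lo -j ACCEPT
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # e.g. allow incoming SSH
  iptables -A INPUT -p icmp -j ACCEPT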
After some time I have stopped trying to help people
in the IRC channel
#iptables
because in general they try to do very difficult
things without having much of an idea of just how hard
it is. Sure, anybody can use the
iptables
command, and it is very easy to
do so; but how easy it is to use a command does not
correspond to how easy it is to use a command
well.
In some way free software
has
lowered too much the perceived barriers to
usage. No question that it has lowered them, and that
it has been beneficial, as a lot of the mystique of
writing operating system and network code was
excessive. But an unwelcome side effect of easy
availability and transparency has been that in the
minds of some users there is now the impression that
if something is physically accessible, it is also
accessible without skills (however this has long been
the assumption of hiring managers in the IT industry).
Another symptom of this attitude is the large
number of people that attempt to compile recent
versions of software from sources without being
programmers, or being programmers without building
skills (surprisingly rare even among experienced ones
I have often noticed). Sometimes these packages
contain step-by-step instructions and sometimes they
even work in every possible context.
But many times over they do not, and the users
ask for help on IRC/mailing lists/Usenet about issues
that are quite difficult to describe, never mind master.
- 060319b
- The interesting array processor architecture from
Clearspeed may be
of interest to AMD.
It is somewhat Transputer like, with 96 processors with
6KiB each and
128KiB of shared memory, which makes it sound like an
optimizer's extreme challenge, much more so than PS2
and PS3; the company's previous incarnation as
PixelFusion was about using
it as a fully programmable graphics accelerator.
Interesting stuff, especially as it allegedly
draws only 10W.
- 060319
- Just cleaned the mesh dust filter at the bottom of my
Lian-Li PC-60 case
where the bottom fans and hard disks are. It was not
that clogged, but still hard disk temperatures went down
4°C from 37°C and CPU and chipset temperatures
down 2°C from 50°C and 34°C (ambient
temperature around 20°C). I occasionally vacuum my
CPU heatsink fan and motherboard to prevent trouble. A
guy I know burned his CPU because of dust buildup in
the heatsink and in the CPU fan, even if it was not as
bad
as this.
- 060315b
- Having had a fresh look, I have finally managed to
figure out at least some part of how to
define and make use of
parametric
virtual devices in an
ALSA
configuration file.
Given that I think that the documentation is
extraordinarily (and perhaps deliberately) bad, this
required quite a bit of experimentation. I shall
update my
sample
asound.conf
and my
Linux ALSA notes
with the details soon.
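In the meantime, just as a rough illustration (the names
and values here are made up, not taken from my actual
asound.conf), a parametric PCM definition looks more or
less like this:
  pcm.mydmix {
          @args [ CARD ]
          @args.CARD {
                  type string
                  default "0"
          }
          type dmix
          ipc_key 2048
          slave {
                  pcm {
                          type hw
                          card $CARD
                  }
          }
  }
which can then be referred to with the parameter filled
in, for example as aplay -D mydmix:CARD=1 file.wav.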
- 060315
- The usual
sycophants of the Microsoft way
of doing things have infested the ALSA code too,
consider for example these two entirely awesome error
messages in the ALSA library:
ALSA lib pcm_hw.c:1305:(_snd_pcm_hw_open) Invalid value for card
ALSA lib pcm_dmix.c:832:(snd_pcm_dmix_open) unable to open slave
Which value? Which slave?
Have a guess!
Writing messages like that must give a wonderful
sense of empowerment to some software engineers...
- 060314c
- Sometimes the details matter, and I noticed a detail
as I was helping someone get an X server modeline for an
Acer AL1916WS
LCD display (which apparently is pretty good and at
£180 not that expensive). The detail is that the
monitor has a 1440x900 pixel size and is sold as a 19"
LCD monitor, as it is widescreen and has a
diagonal of 19" or equivalent.
The amusing aspect is that a regular 17" LCD with a
pixel size of 1280x1024 has 1,310,720 pixels, and that
the alleged 19" monitor has 1,296,000 pixels, which is
just a bit lower.
What is happening here is that the alleged 19"
monitor is by my reckoning just equivalent to a 17"
one, just with an 8x5 aspect ratio which, being more
asymmetrical than 5x4, results in a longer diagonal.
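A quick back-of-the-envelope check with awk (assuming the
quoted diagonals are exact) makes the point:
  # height of a 19" 8x5 panel vs. height of a 17" 5x4 panel, in inches
  awk 'BEGIN { print 19*5/sqrt(8^2+5^2), 17*4/sqrt(5^2+4^2) }'
  # prints roughly 10.07 and 10.62: the "19 inch" widescreen panel is in
  # fact slightly shorter than the 17" 5x4 one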
But classifying monitors by diagonal size is usually
done under the assumption that the aspect ratio is 4x3
or close to it. Quoting monitor diagonals for rather
oblong monitors seems a bit misleading to me.
- 060314b
- It is little known that
pragmatics
is
an important aspect of programming, because pragmatics
really is about programming-as-communication, not
merely programming-as-tool. As to pragmatics, one
important aspect of the
UNIX style
is that data should be easily sortable. Well, as to
that unfortunately a few very common datatypes don't
sort naturally, where naturally
means in
lexicographic order: dates, internet domain names, and
IP addresses for example. Recent variants of the
sort
command (for example
msort
)
can properly handle month names for example, and that
helps a fair bit, but internet domain names and dotted
quad IP addresses are still a problem.
Part of the issue is that internet domain names
violate another important rule of pragmatics, that in
left-to-right scripts one should put the most specific
part of a datum to the right. That is, a domain name
like WWW.sabi.co.UK
should really be
written as UK.co.sabi.WWW
; similarly
email addresses should be written not as
localpart@
domainname
but vice versa, as in com.example.ma.boston@ted
.
Curiously two non-internet systems had this right,
that is the (otherwise unlamented) UK ISO system and
UUCP style mail addresses (which were relative, also a
good idea).
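As an illustration, rewriting domain names
most-significant-part-first so that plain lexicographic
sorting groups them sensibly takes only a line of awk
(the second name is a made-up example):
  printf 'WWW.sabi.co.UK\nlists.example.org\n' |
    awk -F. '{ s = $NF; for (i = NF-1; i >= 1; i--) s = s "." $i; print s }'
  # prints UK.co.sabi.WWW and org.example.lists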
As to IPv4 addresses, the problem is that for some
inane reason they are commonly notated in
decimal dotted quad fashion, as in
127.0.0.1
, instead of in hexadecimal
dotted quad or, even better, pure hexdecimal notation,
as in 0x7f000001
; unfortunately
IPv6 addresses
are so long that a non-numeric and non-lexicographic
notation is hard to avoid, but even so the standard
could have required leading zeroes in non zero hex
digit quads... The hexadecimal notation for IPv4 is
actually perfectly legitimate and properly written
tools will accept it too (as well as decimal
numbers):
# ping 0x7f000001
PING 0x7f000001 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=99 time=0.033 ms
# ping 2130706433
PING 2130706433 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=0 ttl=99 time=0.106 ms
but not all networking tools are properly
written. Even more importantly, just about no
networking tools have the option of printing
addresses in hexadecimal, which makes them sort of
useless as the source of a command pipeline.
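A rough workaround is to convert dotted quads to
fixed-width hexadecimal before sorting (the addresses
here are made up):
  printf '10.0.0.2\n127.0.0.1\n9.1.1.1\n' |
    awk -F. '{ printf "0x%02x%02x%02x%02x\n", $1, $2, $3, $4 }' | sort
  # 0x09010101
  # 0x0a000002
  # 0x7f000001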
- 060314
- Interesting results from a comparison of
top end video cards with 256MiB and 512MiB of RAM:
in high resolution (1600x1200 and higher), high
quality (AA,
AF)
modes, modern games can use up 300-400MiB of texture
memory, in which case 512MiB cards have a definite
performance advantage.
However at 1024x768 and even at 1280x1024 there is
little difference, around 10% at most. Unfortunately
at max quality none of the three games tested would
use less than 128MiB, which is what my videocard has
got, but then it can't really handle max quality
speedwise either (it is just a value priced
6800LE).
- 060313
- Just read an article on the
CPU costs of shared libraries
which add up to something fairly significant. But the
much larger costs, especially for
badly constructed shared libraries,
are in memory usage.
- 060312d
- Just read an
amazing e-mail message
(thanks to
Digg.com)
from Mark Shuttleworth about delaying the
next Ubuntu release by a few weeks for extra
polishing.
The amazingness is not in the delay but in the
reasons he gives for the delay:
However, in some senses Dapper is a
"first" for us, in that it is the first "enterprise
quality" release of Ubuntu, for which we plan to offer
support for a very long time. I, and others, would
very much like Dapper to stand proud amongst the
traditional enterprise linux releases from Red Hat,
Debian and SUSE as an equal match on quality, support
and presentation. We would like Ubuntu Dapper to be a
release that companies can deploy with confidence,
which will be the focus of certification work from
ISV's and IHV's, and which will bring the benefits of
Debian to a whole new group of users.
There are several aspects of this statement that have
shocked, shocked me :-)
, one is that
Ubuntu obviously wants to compete with the likes of
RedHat and SUSE (commercially that is), and the other
is that bring the benefits of Debian
sort of
implies that Mark Shuttleworth regards
Ubuntu not as a fork but as a variant of Debian.
- 060312c
- Belated article on
IBM using Cell for compute servers.
It is a bit late because a few months ago the same
thing was
demonstrated
at LinuxTag
in June 2005...
- 060312b
- Engaging article on
virtual machine based rootkits
created as a proof of concept by Microsoft and UofM.
As to their undetectability, more or less any
rootkit that modifies the OS kernel is also
undetectable like a VM based one, because the OS
kernel in effect provides a VM to applications. The
only reliable way to detect a rootkit is to scan a
disc with a known-good system, for example with a good
copy of one of the many Linux based live CDs.
But most importantly this proof-of-concept
demonstrates how dangerous it may be to have
DRM based on VM technology
because it can be easily misused, as the recent
discoveries on the Sony DRM for CD-ROMs have amply
demonstrated:
"It's a dual use technology. It's got uses
and misuses. Intel has to answer what guarantees it is
prepared to give that home users are safe from
hackers. Not maybes, guarantees".
- 060312
- More speculation from
an article on the PS3 launch
cliffhanger:
The company is known to be aiming for a
September launch, but this may still be an unrealistic
goal.
The market is becoming impatient with
Sony, which has still not officially moved from its
'spring' launch schedule, despite the season's
arrival.
As to the second quote, amusing naivety: some people
count as spring April, May, June (that is, the spring
quarter, or Q2), and in any case a deadline
specified as an interval conventionally means the
last day of the interval, not the first.
In another article
on the same subject there is a peculiar statement by the head of
SCEE:
Reeves also talked about the fight with
Xbox 360. "Most of the first million people who buy an
Xbox 360 in PAL territories will also buy a PS3," he
predicted.
which sounds very peculiar to me: is there really a
mass market of households prepared to spend around
US$1,000 to buy both consoles? And that on top of one PC
or two? If so, the sales of the game industry will
expand dramatically. My impression is that most households
will buy one or two PCs for general internet access and to
play MMORPGs, and one major console, plus possibly a
portable or small one, like a Nintendo Revolution or
DS or a PSP.
Unless the quote above means that SCEE expects
Xbox 360 and PS3 to target mostly the enthusiasts
with lots of disposable income, the sort of people
that buy US$400 video cards for their PC, not the
mass market. This may indeed be what he means by
Most of the first million people
above.
- 060309b
- Reading some of the usual speculation as to the
PS3 launch date and price,
which Sony insists will be sometime this spring. Well,
I suspect that sometime this spring will mean
June 30th (the last day that might
still qualify), and they will pull an Xbox 360
trick, with just a small initial run of heavily
subsidised prototypes, and then a long wait for the
real production systems, which will be much cheaper to
manufacture in volume.
As to the price of Xbox 360 and PS3, it is
quite likely that both Microsoft and Sony, in slightly
different ways, hope to make their console so
expensive that households will not be able to afford
the other (if customers buy both, they will split
their game purchases between the two, and the take-up
ratio of both will be too low).
Thus they are going for a winner-take-all
strategy, probably geographically based (Japan to
PS3, the USA mostly to Xbox 360, the rest of the
world mixed); conversely Nintendo obviously hope that
the Revolution will be so cheap (same or less than a
Sony PSP!) that it will be the second console of
choice for those who buy either Xbox 360 or PS3.
This strategy also makes more sense for Sony,
because the price of Xbox 360 and PS3 is
comparable to that of many low end PCs, and there is
no question that Sony is trying to harm Microsoft's
sales of OS licences as much as possible: low end PCs
are ideal for Microsoft, because the OS licenses they
sell are priced per-unit, not as a percentage of the
sale price, so they make a lot more money when two
US$400 PCs are sold than when one US$800 PC is sold.
Interestingly
the price range of the Xbox 360
is high enough (especially when one considers that it
comes without a monitor or a printer) that it overlaps
quite a bit
the price range of low end PCs
of some of Microsoft's largest licensees, like
Dell. Just about the only thing that the Xbox 360
lacks to compete with a low end PC is a port of
MS Office, and of
course
it would not take much for Microsoft to do one;
but for now Microsoft are still holding back from
competing with their own licensees, the mere
possibility being enough for now to concentrate minds
on who is in control.
Which suggests that Sony will not only deliver
GNU/Linux on PS3, but will quite deliberately
include something like
OpenOffice.org
or
KOffice
, as every MS Windows or MS Office license
sale that a PS3 displaces helps Sony
cut the air supply
of
Microsoft.
- 060309
- Was discussing the next-gen consoles, and my
argument is that if Nintendo chooses as they did in
the previous generation, the Revolution will be really
high performance and a lot easier to port PC games to than
the Xbox 360 or PS3, and there have been
some rumours
that seem to indicate Nintendo are quite wise; in
particular the 256KiB
primary cache and 1MiB
level 2 cache sizes are going to matter far more than
the extra 2 CPUs of the Xbox 360 or 7 SPEs of the
PS3. Sure the extra processing power of the other
other two, especially the PS3, will matter if game
developers completely rethink their game
architectures, but that is not going to happen,
because the obvious way to do it is to make game code
platform specific.
The cache is so important because in effect
memory is another processing unit, with latencies
that are much higher and throughput much lower than
those of conventional CPUs, so as a rule it is memory,
not CPU, that is the bottleneck.
Some of the specs in the rumours above however
look suspicious, like having 512MiB of main RAM
and 256MiB of GPU RAM; but the most
suspicious is the presence of a physics accelerator
chip with 32MiB. It is suspicious because it is quite
unnecessary, and existing physics acceleration chips
are rather power hungry and not very effective.
However if the physics accelerator is just another
PPC core with some onchip
RAM, then it is a good idea, because PPC is pretty
good at doing physics. My idea of a good, cost
effective, physics accelerator for PCs is just a
PCI/PCI-X card with a PPC
core and some RAM on it. No need for
bizarre custom chips
except of course to tell investors stories about
owning intellectual property.
- 060306b
- I have just repeated my earlier test on
JFS performance degradation with time.
I have upgraded my relatively slow 80GB disks to
rather faster 250GB ones and slightly increased the root
partition size to 10GB from 8GB about 2 months ago,
and done a fair bit of upgrading in the meantime, and
here is a comparison between reading the whole root
partition, on the same quiescent disc, first as is,
and then after reloading it with
tar
,
which would ensure pretty much optimal layout:
Used vs. new JFS filesystem test

File system  | Repack      | Avg. transfer rate
used JFS     | 12m32s 51s  | 10.8MiB/s
new JFS      | 05m53s 50s  | 21.3MiB/s
Over time the filesystem has become twice as slow,
which is not bad, at least compared with
seven times slower for ext3
even if the latter was over a longer period of time
and with perhaps more frequent updates.
It is also very notable that on the new 250GB disc
both times are around half those on the older 80GB
discs; I suspect this is mostly because I chose the new disc
(a Seagate ST3250823A
) to have a
short seek time.
- 060306
- Well, now that dual core CPUs cost little more than
single core ones, Intel have announced
the end of HyperThreading
which was a way to do a dual CPU system by sharing
most parts between the two CPUs, and thus allowing
only partial parallelism between them. It worked
fairly decently for what it cost: something like
adding 5% to the complexity of one CPU for something
like a 10-30% gain. Full dual cores add 80-90% to a
single CPU complexity for a 50-90% gain. In both cases
the gain applies only to well written
multithreaded code.
As Intel leaves HyperThreading behind, game
console manufacturers endorse it in the Xbox 360
and PS3 Cell CPUs. I suspect it will not do much
good, in part because current game structures are hard
to multithread, even if I have some ideas on what kind
of
coarse partitioning
might be done for games.
- 060305
- Interesting news from the game industry, about
Lionhead downsizing
because of unexpectedly low sales apparently due to
shrinking of the PC games market.
However what is interesting is the numbers of people
left and how many projects they are working on:
A spokesperson said the firm is focusing
on two next generation products.
Molyneux has decided to focus on only two
games at one time; one has been in development for a
year and the other is just ramping up. The firm will
also retain a small 'prototyping' team.
Lionhead's staff has therefore been cut
from about 250 to 200, 180 of whom are
developers.
Presumably since one project is being developed and
the other is just ramping up
their team sizes
will be different, but even assuming an equal division
of manpower and a dozen or two developers on
prototyping, that's at least 80-90 developers per next
generation project. Pretty huge.
- 060226c
- As to games, I was really delighted to discover that
the source to one of my
favourite games,
Enemy Engaged: Comanche Hokum
has been released
and therefore that splendid game is being updated and
upgraded by
the community
of its users.
- 060226b
- There are some games, mostly GNU/Linux based, that I play
semi-regularly, and they are all online multiplayer
ones, as they allow me to jump in for a match, spend
half an hour, and then continue. Perhaps it is because
everybody else is
playing MMORPGs
(World of Warcraft has now
more than 5 million subscribers by itself, and there
are statistics that show that
the median time
spent playing is 20 hours per month), but there are
very few players online for
UT2004
or Quake 4
even if there are still a few for
Tribes 2.
It is a bit of a pity, because the scarcity of
players means that some game modes or modifications
attract nobody at all; for example in UT2004 almost
only Onslaught
mode has some
players on it, and virtually nobody is playing
UTXMP
which is a rather complete, polished
Team Fortress
style modification.
Ironically, the MMORPG market is almost entirely
PC platform based, and it could be argued that because
of it the PC platform is dominant again. Considering
the tremendous increase in price for next generation
Microsoft and Sony consoles and games, buying and
installing an MMORPG on an existing PC may seem a
cheap option, even factoring in some months of
fees.
- 060226
- I have discovered recently that the development of
KIAX
proceeds apace and the recent
KIAX 0.8.5
is quite improved even over the version I had
previously mentioned.
- 060219b
- The Inquirer rightly
makes fun of the Firefox programmers for the
memory leaks,
in particular the
intentional one
where the browser caches recently visited pages in
their entirety to make it quick to go back to them.
Konqueror and other browsers also have this
feature
which greatly contributes
to the bloat, because caching things just in case is
only worthwhile when one has infinite memory.
Also, I suspect that the cache is not managed per-tab:
when a tab gets closed the cached pages relevant to that tab
should be thrown out, but I very much doubt they are;
closing all tabs does not seem to reduce the
memory footprint much in most browsers.
- 060219
- Thanks to a friend I have discovered the video recordings
of the
Google engEDU talks.
Among these I was quite interested in the one by
Hans Reiser about Reiser4
which was interesting in several ways. For me the
major one was that I quite like his insistence about
handling large nonhierarchical namespaces, search
engine like; however Reiser4 is still quite
hierarchical, unlike for example this proposal for
keyword based file names.
Somewhat related, I also remember a quite interesting
dissertation by Robert Stroud on
Naming Issues in the Design of
Transparently Distributed Operating Systems
which concludes that precise relative names scale but don't
work, and precise absolute names work but don't scale, and
therefore fuzzy names are probably best.
- 060212
- There has been a highly unofficial interview with at
least one PS3 game developer on the capabilities of
the new console's GPU and CPUs; in particular,
probably most games will not support 1080p, except via
hardware upscaling, and there is an
interesting statement
on software (probably meaning SPE) use to add to the
massive power of the NVIDIA GPU:
SCEI's Masa Chatani describes PS3
architecture as elegantly simple with outstanding
performance, and developers say they love the
streamlined Open GL environment. But our guide adds:
"Cell is weird and difficult to work with... coding
has progressed with high speeds and paper specs in
mind, it's one of the reasons framerate specs aren't
met yet. We've been anti-aliasing through software
which also means a performance hit, although the
720p upscaling minimises that problem a
bit."
Well, yes, various types of postprocessing are indeed
one of the possible uses of the SPEs. A bit of a waste
perhaps.
- 060204b
- Just discovered an interesting paper from Intel
about ELF symbol visibility and
ELF dynamic linking performance;
these costs exist because ELF was designed for exceptional
flexibility in an age in which programs were much
smaller and so were shared libraries.
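As a small illustration (the file names are made up):
restricting default symbol visibility when building a
shared library, so that only explicitly exported symbols
go through the dynamic symbol table, is roughly the sort
of measure discussed, and it can reduce dynamic linking
costs:
  gcc -shared -fPIC -fvisibility=hidden -o libfoo.so foo.c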
I have already mentioned the
contribution of Ulrich Drepper and others
to improving practices with dynamic linking; it is nice
to see other people care.
- 060204
- The
ECC RAM product by Kingston
that I
recently purchased
comes with a very interesting
list of supported motherboards
and this is in essence a list of all motherboards that
support ECC RAM known to Kingston. Which is useful,
because motherboard manufacturers, never mind
resellers, often don't mention whether a motherboard
does ECC. That list seems mostly reliable, even if it
includes some ABIT motherboards, and ABIT technical
support told me none of their motherboards do ECC.
The point here is indeed performing ECC,
as virtually all motherboards support ECC RAM sticks
in the sense of compatibility, accepting them and
ignoring the ECC data. ECC typically depends on two
aspects of the motherboard, whether its memory
controller can do ECC and whether the wiring is such
that it can actually be performed. Usually, but not
always, if the memory controller can do ECC the wiring
is there.
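Incidentally, one can get a rough indication of whether
ECC is actually being performed on a running GNU/Linux
system with something like the following (output varies,
and the EDAC drivers are fairly recent):
  dmidecode -t memory | grep -i 'error correction'   # what the BIOS reports
  ls /sys/devices/system/edac/mc/                    # present if an EDAC driver is loaded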
So the question most of the time is which memory
controllers can perform ECC. Memory controllers are
usually part of the north bridge of a motherboard
chipset, except for Athlon 64 and Opteron
motherboards, as the memory controller is part of the
CPU. As to desktop and workstation class northbridges,
Intel have
tables of their chipset features
which show which ones support ECC and which
do not. It is also interesting to list popular chipsets or
manufacturers that I think do not support ECC:
- Socket A: all non-AMD chipsets.
- Socket 478: Intel 845G, Intel 865 (all
variants), Intel 81x, almost all VIA,
SiS, NVIDIA.
- Socket 775: Intel 915.
- 060203
- So what about the new MacIntel systems? Well, they
show why Apple decided to switch from PowerPC to
IA32: the
Core Duo
CPUs are just very good revisions of the classic
Pentium Pro design,
delivering pretty good performance at very low power
consumption (around 25W); while PowerPC is still
competitive with the Pentium 4, the market, and
in particular Apple's, is moving ever more towards
mobile or small form factor computers (more than half
of all computers sold are laptops nowadays, and I
guess that for Apple the percentage is much higher).
The new
Intel Core Duos
are fairly impressive even if they are not AMD64
compatible, but the next generation will be. With that
Intel will have largely caught up with AMD in terms of
performance and features; and when they add
virtualization (which contrary to some reports is
not yet implemented as part of Core) they will
have an extra feature.
But still the most interesting development is that
in effect it is now Intel that in some market segments
is attacking AMD's product lineup from below, offering
lower cost alternatives, as in the case of the much
less expensive Pentium D 820 vs. the
Athlon X2 3800+.
It is quite interesting that Intel seems
determined to continue being the cheaper alternative
to the Athlon 64 X2 with the Core Duo, as the
prices per chip are
reported
to be around US$240.
- 060201
- Discussing what use a Cell style architecture can be
for simulations, and the previously mentioned idea of
coarse partition
of the processing, I was asked for some examples other
than textures/lightmaps and characters. Well, several
years ago I met the people who were aiming to do a
ray traced
game called Vigilance
(the demo is still
available
and despite not becoming famous and not being quite
finished it even got some
fairly positive reviews).
That was of course a bit too ambitious for the
time (partly as a result of overambitious goals the
game was released even in a not quite really polished
state), so they ended up with static light sources and
ray traced lights on fixed geometry.
But some members of the Vigilance
team have kept working at the technology, and a
credible if small dynamic ray tracing system
was already sort of feasible on a 2GHz PC. Others
have developed some
dynamically raytraced gamelets.
Now the beauty of ray tracing is that it is the poster
application for non shared memory SMP/NUMA/... systems,
as it partitions really well,
for example it was used to demo
Transputer
based machines extensively (then, far from real time).
Part of the attraction of something like PS3 for
graphics is that it can be used to implement graphics
techniques that are not just polygon/texture based,
which so far have utterly dominated if only because
they are the only ones for which cheap hardware
accelerators are available.
There have been rumours that the PS3 originally
was to be, or could have been, a two Cell machine,
with fully software synthesized graphics. However in
the end Sony apparently preferred a classic NVIDIA GPU
to the second Cell; the reasons rumoured have been
that even two Cells could not deliver high enough
software graphics performance.
Perhaps, but I suspect that the real reasons
probably were providing a familiar PC-like graphics
system for first wave games (PC-style game programmers
are heavily invested in PC-like graphics tech), and
perhaps even more importantly the anticipated
difficulty of
manufacturing enough Cells for launch,
never mind if each PS3 had two of them.
Whatever, every PS3 will have 7 spare SPEs and
256KiB memory areas, and these should be put to good
uses, among them for example stuff that is expensive
or difficult to do on the NVIDIA graphics chip.
Finally, another possible use for one SPE: in game
streaming video, off that big Blu-Ray disc. Games like
GTA III have demonstrated how entertaining in-game
audio can be, if done well. Well, Sony own a large
film library at MGM/UA etc., and they have already
released several movies for the PSP UMD. However of
course in-game audio is less distracting than video.
We shall see...
- 060130
- Slower, less power hungry, hard drive spinup is
an important parameter,
even if it is hard to find information about it. But I
was delighted recently to see that Western Digital
have added such an option
to make it more likely that external USB/FW2 boxes
will work with their drives. But I was astonished to
see that this is for a 2.5" drive, and 2.5" drives are
pretty low power already. But then many external
USB/FW2 boxes don't have a power supply and draw power
from the bus, so it is more understandable.
Western Digital also have had for a while, like
other manufacturers, the
option to delay spin-up
from power-on until a command is received;
this is useful to prevent all drives in hard drive
arrays from spinning up at the same time, staggering
their coming online instead.
But I wonder why both mechanisms are not replaced
by a much simpler automatic feature: monitoring the
12V power rail and when it starts going down, slowing
down the spin-up rate. In this way perhaps spin-up will
not be as fast if the power supply is marginal, but it
would be far more reliable.
- 060129c
- As to the fatal issue of ECC and RAM, the same smart
friend who pointed out
an additional problem with RAID5
has observed that the main
advantage
of
not using ECC with RAM also applies to the lack of security
and security auditing measures: just as a system
without ECC for RAM appears more reliable than a
system with it, because far fewer problems get reported,
a system without security measures appears more secure
than one with them, because far fewer security problems
get discovered.
As some corporate data center guy once famously
said, As far as I know we never had an undetected
error
.
- 060129b
- Now that I remember, a smart friend pointed out
another reason why
RAID5 is a bad idea
especially for writing: given that every write to a
single logical block involves multiple reads and
writes (the old data and old parity must be read before
the new data and new parity can be written), this
considerably worsens the assumptions
about wear and tear on the drives involved. Just say
no to RAID5.
- 060129
- Interesting news that in a number of benchmarks
WINE under Linux outperforms MS Windows
at running WIN32 applications. This is not unexpected
(except for memory and swap, Linux is fairly
efficient), but I was surprised that Quake 3
reportedly runs nearly as fast as under MS Windows.
This probably is because Quake 3 uses OpenGL
even in its WIN32 port, and then it is not difficult
for WINE
to just wrap WIN32 OpenGL calls to native Linux OpenGL
calls, which with the right drivers can be fully
accelerated. For DirectX games though I think that
Cedega
(which is a WINE derivative) does pretty well too.
But then I think that it is much better to have
native GNU/Linux versions of games, of which there are
quite a few already: providing an alternative
implementation of the WIN32 and DirectX platforms just
adds to their value. This was the OS/2 curse: it would
run WIN16 applications better than MS Windows 3, and
so well that there was no point for developers to
target the native OS/2 APIs, which therefore lost
relevance, and eventually this extended to OS/2
too.
- 060128c
- As to games, a fascinating or terrifying graph just
discovered about
estimated total number of people playing MMORPGs,
where the numbers grow from around 250,000 at the beginning
in July 1999 to around 5,000,000 in July 2005.
The graph and the numbers are truly impressive,
and while they surely are marvelous news for the
MMORPG industry, they must be quite terrifying to
those that develop and sell other types of games.
Each of those 5,000,000 people is paying a monthly
fee, and each quarter pays around the whole price of a
new game, and spends a lot of hours in their MMORPGs,
and this means that they have less money and time to
play traditional single or multiplayer games.
Given this, it is far from surprising that as
John Carmack says:
The PC market is getting really, really
torched. Todd mentioned a statistic: last year saw
the PC make half the gross revenue of three years
ago.
as those 5,000,000 online players, who are usually
the most committed game players, the mainstay of the
PC game industry, are putting something like US$600m
in fees into MMORPGs every year, and that's a lot of
money they are not spending on traditional PC games,
never mind the time it takes; and time matters, as
gaming time is a finite and scarce resource, because
those that have plenty of time to play games usually
don't have the money to pay for them, and those that
have plenty of money to buy them usually don't have
as much time to play them.
No surprise that it looks like there is a lot of
piracy (and there is some) in the PC game market: the
MMORPG industry is pirating a lot of customers and
recurrent sales. No surprise that recent game
consoles, whether desktop or portable, emphasize
networking so much.
- 060128b
- Chatting with someone about my game technology
views, I mentioned a note in a recent issue of the
Edge magazine
that game consoles, the latest and greatest of which
have multithreaded multiple CPUs, are meant as vehicles
to sell some
signature
games.
Indeed these games are so important that a good
way to assess the architecture of a game console is to
ask if it fits well the needs of the most important
ones.
Now the most important signature game for Sony
consoles has been the
Gran Turismo series
and this leads to some speculation about the PS3
architecture, 1 CPU and 7 independent CPUs with a
small dedicated memory each: that it was designed to
run Gran Turismo particularly, dedicating each CPU to
a different car in the race.
In other words still
coarsely partitioning the load
but not by processing phase or by global effect (like
lights) but by actor in the simulation. Now what could
be so time consuming and at the same time localized to
each actor that might warrant a CPU dedicated to it?
Well, as usual graphics, in particular I think
lighting effects and texturing, if not generated
dynamically, at least updated on the fly.
Partitioning by actor (where perhaps the level
itself can be considered an actor) fits well also with
the idea that in most games, which are simulations,
actors are the focus of action, and there are either a
few, or very many: a few for example in single person
or multiperson fighting games, and very many in
strategy games or massive online games. With a few
each can be (semi) permanently loaded onto a separate
CPU; with very many, they can be processed in
subsets, each subset on a distinct CPU.
This style fits well other Sony classic game
series, like sports games and dojo fighting games.
- 060128
- I have recently been
looking at games
and the main reason is that they are what pushes
technology, at least for smaller systems. That I had to
expand my RAM
was mostly due to the demands of recent high end games.
But game programmers constantly push the
boundaries of technology, resulting in things like
graphics cards that cost much like a PC, and that are
in effect
massive array processors.
One of the reasons is that there are two types of games:
- Rule based games, where
victory
mostly comes from exploiting the rules. For
example chess, sudoku, or board games. The skill
needed to play them is mostly intellectual, and
usually more strategic than tactical.
- Behaviour based games, where
victory
mostly comes from dexterity in performing actions. For
example football, capture-the-flag, or
Mikado sticks.
Of course the boundaries are not totally sharp, as
there are rules in behaviour based games, and some
elements of behaviour in many rule based ones (for
example chess and Go
are symbolic
simulations of war).
What matters to technology is that behaviour games
are essentially simulations, they are based on a
let's pretend
logic.
Many, if not most, computer games are behaviour
games, and require simulating some virtual world,
whether realistic or imaginary, even if some are
transpositions of board games. Behaviours inside
simulations are very expensive on computers, in part
because our senses have amazingly high resolutions,
and in part because the analog world is not that
compatible with digital logic.
So, in the pursuit of better simulations, technology
has to be pushed hard, and so it will have to be for a
long time and, crucially, PCs are upgradeable, unlike
game consoles.
Now, what is interesting to me particularly
is that games are pushing technology towards
parallel architectures.
This push has been going on for quite a while, and not
just recently with SMP machines. The very
use of autonomous and very powerful array processors
with large memories, which is what recent graphics
cards are, is one example. But also many
massive online games
are increasingly based on simulations that run on
large parallel clusters (also
1,
2).
- 060127b
- Well, I have been rather skeptical about the merits
of RAID5 for a while, and I was eventually fully
persuaded by the arguments
(1,
2)
of the
BAARF campaign
that RAID1+0 (striping mirror sets) is overall a lot
better.
Well, I was chatting with some smart friends about
this and they decided to try to switch some of their
storage to RAID10 from RAID5. Now that was a backup
area, so not exactly the most suitable for RAID5, but
still backup rates improved from 450 megabits/s to
5,000 megabits/s.
Since the discs used are capable of around 300-400
megabits/s sustained, that means that in
write-intensive usage RAID5 was delivering no
striping advantage, while RAID10 did deliver the full
advantage of striping across the mirror pairs.
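For reference, one way to set up striped mirrors under
Linux is with the md raid10 personality, whose default
near layout is roughly equivalent to RAID1+0 (device
names are made up):
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1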
- 060127
- Well, eventually I sold out and bought
1GiB of PC3200 DDR
RAM for my ancient PC. And yes, this is a kind of
if you can't beat them join them
situation, for someone like me that has been pointing
out that my previous 512MiB
should have been enough, and that most Linux
developers have 1GiB and more of RAM and could not
care less about
memory and
swapping
inefficiencies because they never arise on their
systems.
Well, that was quite right: since I upgraded, my
system no longer swaps and works a lot better. My
deduction is therefore that the no longer poor
Linux kernel developers have at least 1GiB RAM
and that Linux is simply unsuitable for any situation
where virtual memory exceeds real memory, a conclusion
that I was reluctant to draw.
My excuse is that I want to try out some recent
games that simply don't fit in 512MiB: for example
Quake 4 requires 600MiB, and F.E.A.R. rather
more; and games are real time programs and simply
don't work well with paging, even if it is done well,
and it is even more pointless than for other programs
to argue that they should be optimized for memory
usage.
Of course, even if my current (temporary)
motherboard does not take advantage of it, I bought
ECC capable RAM,
simply because it costs only a bit more than RAM
without ECC support (9 instead of 8 chips per side),
and sooner or later I will upgrade to a motherboard
with ECC support too.
- 060113
- Just found a photo of a high performance
WD
drive with a clear plastic cover, which shows it has
2.5" platters
as
previously mentioned.
- 060111
- Thanks to a friend for sending me a link to
an extensive online test of various compression programs
which nicely adds to
my own decompression tests.
The common unavoidable conclusions are that
lzop
is by far the fastest,
bzip2
by far and away the slowest, and
gzip
is sort of average.
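A crude way to repeat the speed comparison on one's own
data is something like the following (the file name is
made up):
  for c in lzop gzip bzip2
  do  echo "== $c"
      time $c -c test.tar > /dev/null
  done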
- 060106
- Well, I am always interested in how technology develops,
and in particular in SMP and power consumption, and it
is interesting to see in this
Athlon 64 X2 3800+ review
a nice comparison of the power consumed by some recent single
and dual core CPUs (a rarely mentioned issue, but the review is
on XBitLabs, which is a particularly good and informative site).
It was interesting but not equally pleasant to see that
a few current CPUs draw (and dissipate) more than 130W. Wow!
It is basically a pretty high output lamp under that heatsink
:-)
.
How good XBitLabs are as to technical detail
is also shown by this interesting comment:
In fact, they could have achieved even
higher power saving efficiency if the cores could
turn to economy mode independently. However, it
looks like this feature will only be implemented in
the dual-core processors designed for the mobile
segment.
- 060105c
- I was chatting recently about trends in game
development: in particular that games, especially PC
games, but not just, tend to be mostly
mods
, even those that actually sell
themselves as original games. In particular many games
are mods based on
Unreal Tournament 2003/2004
or of Quake 3 or
Doom 3.
Indeed an argument can be made that Unreal
Tournament 2003/2004 or Quake 3 or Doom 3 are mods
too, of themselves. What is happening is that famous
game engines like those developed by
Epic Games
or
id Software
get a lot of attention because of their signature
games, but these companies often make more money by
licensing the engines than from the signature games
themselves, so in a sense the signature games are
promotional mods
for the engine.
Large multistudio game companies have a similar
position, developing base engines and then many games
which are mods for those engines, with slightly
different gameplay.
Now, what's the deal for small and middling
independent studios? They can license the well known
engines, or the unknown ones, or they can
roll their own.
Licensing a well known engine is very
expensive, in particular because it involves a large
upfront fee and then royalties. This is because basing
a game on a well known engine is a selling point in
itself, as the engine has cachet
that adds to the marketing of the game.
Licensing an unknown engine means still having a
bit more of a struggle for integration, and then
owning in effect only art assets and scripts.
Developing a custom engine adds
cachet
to the company, as
then it controls almost completely its own
intellectual property. But then the really valuable
properties are not the technology and not even the art
assets, but the brand names, and those usually are
controlled by the publishers anyhow...
However developing a custom engine is a lot less
hard than people think; in part because one can be
clever (not that many try), and in part because there
is virtually no proprietary technology, all technology
one needs being in books and papers (most games of the
same generation look very much alike because game developers
download the same SIGGRAPH papers :-)
),
as the last thing that game studios can afford to do
is to fund original research.
Developing a custom engine also has a very
important marketing effect: one can offer to
customers, whether retail or other studios, the
ability to develop their own freeware or commercial
mods, that can significantly increase sales of the
signature game, even if relatively few people play
it. Many may buy it simply because they want to run a
particular mod, as happened with
Counterstrike
and
Half-Life.
This means that the useful shelf life of a game,
usually pretty short (months) gets significantly
extended, and that can rather improve the economics of
the situation.
- 060105b
- There is a far more comprehensive interview with
John Carmack starting page 62 of the
January 2006 issue of
PC Gamer
which is really quite interesting. Some highlights and
comments:
There is an argument I get into with people
every year. Every generation, someone comes up and
says something like procedural and synthetic
textures and geometry are going to be the hot
new thing
. I've heard it for the past three
console generations -- it's not been true and it's
never going to be true this generation too. It's
because management of massive data sets is always
the better thing to do
- I massively disagree with the last assertion, if
not with the forecast. Sure, I expect most current
generation games to be about massive static data sets
(like Carmack's
megatextures for the Doom 3 engine),
not dynamically generated ones. But the reason is
not that static massive data is
better
, but
just more familiar.
Because most games programmers are at heart PC
programmers, and one can always expand a PC until
it handles massive static data sets, which are
more familiar to program for. Dynamic content, as
in the
demo scene,
and as notably exemplified in
.kkrieger,
requires a different mindset, a bit like parallel
programming (but parallel programming requires
more than a different mindset).
The familiarity of massive static data sets
means that there is good hardware support, in the
form of graphics chips, for the most traditional
and familiar form (triangle meshes and static
textures) and this reinforces the preference.
Several technologies have been thrown in the
dustbin of history, like voxels and ray tracing,
because they require different thinking, and are
not supported by hardware accelerators. Never mind
things like the
ellipsoid
based rendering used in
Ecstatica over ten years ago.
Sure, PS2 and its EE CPU and GPU did have
some primitives that meant is was particularly
suited to generative programming, and in
particular with
NURBS
or similar stuff, but very few game programmers
used that, and just did straight PC-style triangle
static mesh stuff for which the PS2 was poorly
suited (lots of vector power, slow small memory).
Well, I suspect that a few games that have
been PS2-only, written by PS2-culture programmers, like
Gran Turismo 4
, actually do use all the power of PS2, but that's
very rare indeed.
I have a quote here from Valve's CEO, Gabe
Newell, talking about the next generation of
processors and consoles, and what they mean to
gaming. He says that the problems of getting
things running on multicore processors are not
solved. We have doctoral theses but no
real-world applications
. Do you agree with
him?
The difference between theoretical performance
and real-world performance on the CPU level is
growing fast. ... but the new
generations make it much, much worse. ...
when you do a straightforward development process
on them, they're significantly slower than a
modern high-end PC. .... The graphics
systems are much better than that though. Graphics
have an inherent natural parallelism. The
capabilities of the Xbox 360 and of the
PlayStation 3 are really good on the
graphics side -- although, not any head or
shoulders above any PC stuff that you can buy at
a higher price point.
- Exactly! If one looks at these architectures as
if they were PCs, then they only perform well in
the aspects that are most PC like. Hot bits:
straightforward development process
means
familiar PC mindset
; slower than a
modern high-end PC
means that what matters
is the sort of immense PC a millionaire like him
can afford; really good on the graphics
side
but not better than any PC stuff that
you can buy at a higher price point
reinforces
the notion that the mindset if about PC
programming and PC graphics accelerators which are
all about what is familiar, like triangle
meshes.
... probably all of our gameplay
development and testing will be done on the
Xbox 360. It's a really sweet development
system.
- Exactly again! It is the one that looks most
like a PC development system, with something like
Visual Studio etc.; familiarity again. What a sad
statement, though, from someone who was doing games
on a NeXT cube (on which Doom was developed). The
grip of the
Microsoft cultural hegemony...
The PC market
is getting really, really torched. Todd
mentioned a statistic: last year saw the PC make
half the gross revenue of three years
ago.
- But while piracy is surely a significant issue,
hints like this are misleading. The economic climate
has changed a lot in three years, and the gross
revenue of plenty of other things has gone down a
lot. For example, computer science departments have
been sacking a lot of people because student numbers
are way, way down (after having exploded), and this
has not been because course notes have been pirated.
Also, sales of recorded music have gone down,
thanks to a combination of lower disposable
incomes and price increases by music publishers.
Indeed, as to disposable incomes, three years
ago the geeky people who buy games were a lot more
prosperous than they are today: employee compensation,
and not just for geeks, has been
going down by a few percent
a year for the past few years, and the tech
industry, in which many hardcore gamers used to
work, has shrunk significantly (and while the
income of the wealthy has been going robustly up,
they are too few, and in any case they don't much
play games).
A fall of a few percent a year in average earnings
is no laughing matter, and it would be astonishing
if it did not result in much lower sales of
luxuries like games, especially those that require
high-end PCs, when consoles are rather cheaper.
- 060105
- Interesting interviews with the usual John
Carmack on game engine technology,
multiple CPUs and mobile gaming.
The
interview with the Grauniad
says that the occasion is the PR for the launch of
the
Doom RPG version for mobile phones
game. Likewise, another
interview with BuzzScope
(mentioned on
Blue's News)
is mostly about his new Doom RPG for mobile phones.
A comparison among PC, PS3 and Xbox 360 is
also made, and unsurprisingly he reckons that PS3 is
a bit faster in principle, but Xbox 360 has more
easily usable power.
As to mobile games, Carmack says he finds them
interesting, and the technology is rapidly improving.
He reckons that
J2ME
based games lose out a lot in performance compared
to those based on
BREW,
a native code environment (which however has security
implications, and is most suitable for devices with
two CPUs, one dedicated to networking and one to
the user interface). I also found a nice article
from a few years ago
comparing BREW and J2ME
and a good
set of BREW tutorials
including a BREW/J2ME portability guide.
- 060102
- Discovered a nice list of
favourite command line tools
with which I mostly agree. Some notes:
- One does not need to
run X just because it lets me have
multiple xterms on the screen at once,
as
there is a Curses-based
windowing environment,
TWIN,
which is a bit like
screen
but with multiple resizable windows.
- For web browsing I also like the variants of
Links.
It is also described in
a nice comparison of text mode GNU/Linux browsers.
- I wish that I could replace
BASH
with
Zsh
as I think the latter is written rather better (I
don't care much about clever autocompletion, and
there is a very powerful autocompletion scheme for
recent versions of BASH). Unfortunately this is
impractical, as there are several BASH-specific
scripts in the average GNU/Linux distribution.
- I like
wget
too, but also
pavuk
and
cURL.
- Among editors I use Emacs/XEmacs too, but I also
like
Vim
as it starts quicker than Emacs/XEmacs, and it is
more convenient in some cases (mainly where its line
orientation is an advantage), and it has
many more colorization schemes, which
double as minimal syntax checking.
- As a command line interactive FTP client I
particularly like
lftp.
- I don't normally use a file manager/explorer,
but sometimes the GNU
Midnight Commander
is quite useful, in particular because it can
access remote directories, and can open most
archive files.
- 060101b
- The cultural hegemony
(1,
2)
of the Microsoft way of doing things is ever
expanding: the
Elektra project
is about switching Linux to a clone of the Microsoft
Registry for configuration:
About
Elektra provides a universal and secure
framework to store configuration parameters in a
hierarchical key-value pair mechanism, instead of
each program using its own text configuration
files. This allows any program to read and save its
configuration with a consistent API, and allows them
to be aware of other applications' configurations,
permitting easy application integration. While
architecturally similar to other OS registries,
Elektra does not have most of the problems found in
those implementations.
Backends
A great feature of elektra is that you can
implement your own Backend with a set of
functions. So it is possible to have the database in
the way you want. Filesys is ready for use,
Ini-Style and Berkleydb are nearly finished and some
other are planned.
Never mind ditching several decades of proven,
consistent UNIX practice that configuration files
should be text files organized as tables, for
example like /etc/passwd
, so that they
can be easily edited and processed by command
pipelines (see the sketch at the end of this entry).
After all, many configuration files are still text
but in Microsoft .ini
format, which is
hard to process in the conventional UNIX way, and thus
switching to something like the Microsoft Registry is
bound to be an improvement.
But then cluelessness is rampant, and one needs
only to look at the several status files under
/proc
, or the output of several popular
programs, to see that many Linux kernel developers
just don't get the pretty good
UNIX way
of doing things.
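To make concrete what the tabular text convention buys, here is the
sketch promised above, in Python; the registry-style tree is
illustrated with a plain dictionary and is emphatically not the
actual Elektra API. The colon-separated table can be queried by
anything that can split a line, while the hierarchical key-value
store needs its own dedicated API.

    # /etc/passwd is a plain table: one record per line, colon-separated
    # fields in fixed positions, so any line-oriented tool can process it.
    with open("/etc/passwd") as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            name, uid, shell = fields[0], int(fields[2]), fields[6]
            if uid >= 1000:                  # ordinary user accounts
                print(name, shell)

    # The registry-style alternative: a hierarchical key-value tree,
    # sketched here with a plain dict (NOT the Elektra API). It needs a
    # dedicated library to inspect or edit, and generic line-oriented
    # tools can do nothing useful with it.
    registry = {
        "system/users/alice/uid": "1000",
        "system/users/alice/shell": "/bin/bash",
    }
    print(registry["system/users/alice/shell"])

The same field extraction is of course a one-liner with cut or awk;
Python is used here only to keep both halves of the comparison in one
language.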
- 060101
- I have been looking at hard drive specifications to check
things like number of platters, maximum seek time or
peak spin-up current drawn,
and it is not that easy to find them. My impression is
that sooner or later these specifications will stop
being published, as the overwhelming majority of
people who purchase hard drives do not even know
that they exist, never mind that they matter; and this
will mean that such specifications will get worse.
A similar phenomenon has happened for another
feature that I care about, the availability of
ECC
for RAM on
motherboards.
It is difficult to find information as to whether
or not a particular desktop motherboard supports ECC
for RAM, never mind to find one that does support it.
It is indeed safe to assume that if there is no
mention of ECC, the motherboard does not support it.
What is particularly annoying is that ECC for RAM
is not just a very important feature, but also that it
is damn easy to add to a chipset's memory controller
almost for free (a toy sketch of the mechanism is at
the end of this entry). Indeed many past chipsets have
supported ECC for RAM, and yet the motherboards using
them did not.
The reason why ECC for RAM is not supported is
that most buyers don't understand how important it is,
and that desktops without ECC for RAM appear to work;
indeed they appear to work better than those
with it, as they never stop working because a
memory check has failed: memory errors usually just
corrupt data, or can easily be confused with software
issues, and those are hard to notice without ECC
(which is precisely the reason why ECC is so
important!).
It should perhaps be the task of reviewers to
point out the importance of ECC for RAM, especially
given the large RAM sizes prevalent nowadays, but they
don't do that, and they concentrate on superficial
features like layout. But then they do the same for
GNU/Linux
distributions, which are mainly rated on ease of
installation and graphic glitziness, rather than
long-term maintainability and robustness.
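As promised above, here is why ECC is nearly free to implement.
The usual scheme is a SECDED Hamming code: for a 64-bit memory word
only 8 check bits are needed (hence 72-bit ECC DIMMs), and both
generating and checking them is a small amount of XOR logic in the
memory controller. The following toy sketch (in Python, for an
8-bit word with 4 check bits, single error correction only) shows
the mechanism; the function names are mine, purely for illustration.

    def hamming_encode(data_bits):
        # Encode 8 data bits into a 12-bit Hamming codeword. Check bits sit
        # at positions 1, 2, 4 and 8 (1-indexed); each one is the XOR of all
        # positions whose index has that bit set. A memory controller does
        # the same with 64 data bits and 8 check bits, as a small XOR tree.
        assert len(data_bits) == 8
        code = [0] * 13                     # index 0 unused, positions 1..12
        data = iter(data_bits)
        for pos in range(1, 13):
            if pos not in (1, 2, 4, 8):     # data goes in non-power-of-2 slots
                code[pos] = next(data)
        for p in (1, 2, 4, 8):              # compute the check bits
            for pos in range(1, 13):
                if pos != p and (pos & p):
                    code[p] ^= code[pos]
        return code[1:]

    def hamming_correct(codeword):
        # Recompute the parity groups; the indices of the failing groups sum
        # to the position of a single flipped bit, which is flipped back.
        code = [0] + list(codeword)
        syndrome = 0
        for p in (1, 2, 4, 8):
            parity = 0
            for pos in range(1, 13):
                if pos & p:
                    parity ^= code[pos]
            if parity:
                syndrome += p
        if syndrome:
            code[syndrome] ^= 1
        return code[1:]

    word = [1, 0, 1, 1, 0, 0, 1, 0]
    sent = hamming_encode(word)
    received = sent[:]
    received[5] ^= 1                        # simulate a single-bit memory error
    assert hamming_correct(received) == sent
    print("single-bit error detected and corrected")

A real memory controller adds one more overall parity bit so that
double-bit errors are at least detected (the "DED" part), but the
cost remains a handful of XOR gates and 12.5% more DRAM bits, which
is why it is so irritating that desktop chipsets and motherboards
routinely leave it out.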