This document contains only my personal opinions and calls of judgement, and where any comment is made as to the quality of anybody's work, the comment is an opinion, in my judgement.
There is a wide variety of electrical sockets and standards worldwide, and in datacenters the use of "kettle lead" IEC C13 and C14 connectors is common. I have realized that they are pretty good for the home too: they are compact, robust, country-independent, and easy to chain (within limits!), and many devices use them natively, in particular PSUs for PCs, monitors, laser printers, UPSes. For example a small C13/C14 PDU, or similarly a cable splitter, or for a laptop a very convenient C14 to C5 (clover leaf) cable.
Usually C13/C14 cables, PDUs and other such products cost a bit more than those with national-type sockets and plugs, but they are also usually rather better built (the usual fakes excepted).
The Lenovo ThinkPad E495 that I like quite a bit has somewhat limited battery capacity, but that largely does not matter to me as I have been using it mostly on a desk at home or elsewhere, permanently connected to mains power; when standalone, though, it is important to minimize power consumption.
The usual big problem with power consumption is JavaScript-infested web pages in browsers, but I found another two somewhat surprising issues:
There has been a debate among Linux kernel developers about the complications of exporting Btrfs filetrees via NFS, which centers on Btrfs having subvolumes as independently mountable filetrees and accordingly giving each subvolume a different device-id.
The debate has been raised by the previous maintainer of the excellent MD RAID subsystem, who has demonstrated wisdom and knowledge, but not when making this point:
providing as-unique-as-practical i-node numbers across the whole filesystem, and deprecating the internal use of different device numbers.
That point seems based on two gross misunderstandings about UNIX filesystem semantics, and they are common ones: that a filesystem instance is the same as the block device that contains it, and that there should be only one filesystem root in a filesystem instance. But there are no such restrictions in UNIX filesystem semantics: the only condition is that the combination of device-id and i-number be unique (which is implied by the condition that device-ids be unique systemwide).
Note: it is theoretically possible to have multiple filesystem instances inside a block device at non-overlapping offsets, not just to have multiple filesystem roots in a single filesystem instance. ZFS also allows multiple filesystem roots, and in theory (but it is little known) the ancient and excellent JFS also supports them (filesets), even if this functionality is currently unused under Linux. Also both Btrfs and ZFS (and the still experimental bcachefs) allow a filesystem instance and each of its files to span multiple block devices, so the device-id of an i-node cannot identify a single block device.
Btrfs respects that condition: a Btrfs-formatted block device can contain many distinct mountable filesystem roots, each with its own unique device-id. By default those filesystem roots are all mounted under the top one, but that is just an option, and an entirely reasonable one; here is an example:
# btrfs sub create /mnt/sda7/s1
Create subvolume '/mnt/sda7/s1'
# btrfs sub create /mnt/sda7/s2
Create subvolume '/mnt/sda7/s2'
# touch /mnt/sda7/{EXAMPLE,{s1,s2}/EXAMPLE}
# stat -c '%D %i %m %n' /mnt/sda7/{EXAMPLE,{s1,s2}/EXAMPLE}
38 257 /mnt/sda7 /mnt/sda7/EXAMPLE
4c 257 /mnt/sda7/s1 /mnt/sda7/s1/EXAMPLE
5a 258 /mnt/sda7/s2 /mnt/sda7/s2/EXAMPLE
Note: regardless of the type of filesystem, and whether an instance of it has a single root or multiple roots, applications have always had to check both device-id and i-node number to establish whether two i-nodes are identically the same. The situation has been complicated because applications have had to be modified to take into account that Linux allows mounting a filesystem root on more than one mount-point, and Linux treats each mount-point as a separate device for purposes like hard-linking even when they have the same device-id, and here stat reports the wrong mount-point too:
# mount /dev/sda6 /mnt/tmp
# mount /dev/sda6 /mnt/tmp2
# stat -c '%D %i %m %n' /mnt/tmp{,2}/etc/.
10305 698304 / /mnt/tmp/etc/.
10305 698304 / /mnt/tmp2/etc/.
# ln /etc/issue /mnt/tmp/etc/issue2
ln: failed to create hard link '/mnt/tmp/etc/issue2' => '/etc/issue': Invalid cross-device link
# ln /etc/issue /mnt/tmp2/etc/issue2
ln: failed to create hard link '/mnt/tmp2/etc/issue2' => '/etc/issue': Invalid cross-device link
# ln /mnt/tmp/etc/issue /mnt/tmp2/etc/issue2
ln: failed to create hard link '/mnt/tmp2/etc/issue2' => '/mnt/tmp/etc/issue': Invalid cross-device link
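As a minimal illustration of the point in the note above (not from the original discussion): an application can check whether two paths refer to the identical i-node by comparing both fields, for example with a small shell helper; the paths used here are just placeholders.
same_inode() {
    # identical i-node only if both device-id and i-node number match
    [ "$(stat -Lc '%d:%i' "$1")" = "$(stat -Lc '%d:%i' "$2")" ]
}
same_inode /some/path/a /some/path/b && echo "same i-node" || echo "different"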
Note: Linux and Btrfs are also weird in that Btrfs always reports 0 for both the maximum number of i-nodes and the number of used i-nodes, and Linux allows that, which also requires applications to be modified.
# df -i /mnt/sda7/{EXAMPLE,{s1,s2}/EXAMPLE}
Filesystem  Inodes  IUsed  IFree  IUse%  Mounted on
/dev/sda7        0      0      0      -  /mnt/sda7
-                0      0      0      -  /mnt/sda7/s1
-                0      0      0      -  /mnt/sda7/s2
If there is a flaw in the Btrfs scheme it is that the filesystem roots that are mounted by default are not registered in /etc/fstab or /proc/mounts, but that is not essential. Therefore this further point is grossly misguided:
Specifically, the expectation that each object in any filesystem can be uniquely identified by a 64bit i-node number. btrfs provides functionality which needs more than 64bits. So it simply does not fit. btrfs currently fudges with device numbers to hide the problem.
The rest of the debate makes it clear that it arises from the wish to export via NFS (which can handle only a single mount-point per export) what is mistakenly thought of as a single filesystem root, including all the other sub-filesystems (subvolumes, snapshots) mounted under it, as if they were not distinct filesystems. Well, that is simply wrong; just do not do it.
Note: my preference is not to mount the top directory of a Btrfs filesystem instance by default, to use only subvolumes, and to mount explicitly, with the subvol= mount option, the subvolumes that need mounting; each then has a separate entry in the NFS exports list.
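For example, a minimal sketch of that arrangement (the device, subvolume names, mount-points and client network here are hypothetical):
# /etc/fstab: mount only named subvolumes, never the top directory
/dev/sda7  /srv/home     btrfs  subvol=home,noatime     0  0
/dev/sda7  /srv/archive  btrfs  subvol=archive,noatime  0  0
# /etc/exports: one entry per mounted subvolume
/srv/home     192.168.1.0/24(rw,no_subtree_check)
/srv/archive  192.168.1.0/24(ro,no_subtree_check)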
There is no need to force a change to the UNIX design that allows multiple filesystem roots per filesystem instance just to make it easier to do a recursive NFS mount without listing every filesystem root involved (and regardless, there is the crossmnt export option for most cases).
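For reference, a hypothetical export entry using that option, which lets NFS clients cross into the filesystems mounted below the export point (paths and network are again placeholders):
# /etc/exports
/srv/btrfs  192.168.1.0/24(rw,no_subtree_check,crossmnt)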
Note: my guess is that the gross misunderstandings concerning filesystem roots may have an origin in the misguided output of findmnt that lists them by default with a special square-bracket syntax as if they were somehow subdirectories instead of independent filesystem roots.
I was reading a discussion about auditing and there was some debate as to remote auditing of virtual machines on cloud platforms, and it seemed to me quite pointless because it cannot be done: since the host server has full access into each virtual machine, the host server has to be audited at the same time as its guest virtual machines, which is quite impractical. There are now some very complex technical means to prevent the host from reading guest virtual memory, by having it encrypted and decrypted by the CPU, with keys sent to the CPU encrypted with a well-known public key of the CPU manufacturer, but they merely add the CPU manufacturer to the issue, even if that can help quite a bit (if the CPU manufacturer and the hosting business were completely independent of each other).
Renting a remote physical system, while being much less complex and cheaper than renting virtual machines, has the same auditing issue: as long as the physical host is provided by and under the control of someone else it cannot be meaningfully audited.
Colocation is much better: by sending a sealed server to a third-party computing center a lot of potential risks are avoided (remote-hands hardware maintenance becomes impossible, but this can be mostly solved with enough spare servers).
The Bank of England has been looking into the risks of off-premises bank servers but they seem to think that the main issue is the availability terms and conditions rather than auditability, and that tight contractual arrangements with the hosting companies could substitute for actual control of the physical hardware:
Cloud computing providers to the financial sector can be "secretive", and regulators need to act to avoid banks' reliance on a handful of outside firms becoming a threat to financial stability, the Bank of England said on Tuesday. [...] The BoE said cloud computing could sometimes be more reliable than banks hosting all their servers themselves. But big providers could dictate terms and conditions - as well as prices - to key financial firms.
"That concentrated power on terms can manifest itself in the form of secrecy, opacity, not providing customers with the sort of information they need to monitor the risk in the service," BoE Governor Andrew Bailey told a news conference. "We have seen some of that going on."
A comment from a data center oriented site focuses on the availability aspect of auditability:
Other regulators are also concerned about the concentration of the financial sector in just a few cloud companies' hands. In 2019, the US Federal Reserve conducted a formal examination of an AWS data center in Virginia.
The examiners were concerned that an outage or security vulnerability would take out Goldman Sachs, Capital One, Nasdaq, and payments company Stripe, among others.
While another commenter hints somewhat obliquely at the data aspect:
leading technology resource supplier Xperience, which says that legal services businesses need to ensure they are discharging their regulatory obligations in light of the ‘opacity’ of cloud providers. [...]
Iain O’Kane, managing director of Xperience, said: “In the pandemic we saw many organisations seek out the logistical advantages in cloud services. But the Bank of England has made plain the risks apparent when storing data with companies whose lack of transparency may be in conflict with their regulatory requirements. The risk management issue is further compounded by the alleged politically motivated attack on high profile international technology companies”.
As to data, the government of the PRC requires the physical servers to be located inside its jurisdiction:
The national standards require companies' procurement and use of encryption products and services to be preapproved by the Chinese government for networks classified as level 2 or above. The standards further require companies (including Chinese affiliates of foreign companies) to set up their cloud infrastructure, including servers, virtualized networks, software, and information systems, in China. Such cloud infrastructures are subject to testing and evaluation by the Chinese government.
The data sovereignty laws of the USA and of many other national governments have similar requirements, with the implication that data centers located in other countries are risky in that they are subject to access by the governments of those countries. That covers the concerns of states as to ensuring that the data of local businesses is accessible to their own security and surveillance services and not those of other states, but individual businesses with auditability requirements might (or should) have additional concerns.
There is an additional detail that I did not mention as to authentication tokens:
second factors: the loss of any one factor will prevent access, just like the loss of a physical key will prevent access to a building.
useds (a term invented by RMS): the service providers have small incentives to spend money to help them recover access to their accounts. One should register at least three different second factors.
Having multiple authentication factors for the same account is quite important in general; whether the implicit linking of accounts that results from using the same access token for several of them matters depends on the situation. For example, online shopping accounts are likely already implicitly linked by using the same address and credit card numbers.
So lithium-ion batteries, like many battery types, suffer from enervation: they become weaker and weaker with use, in particular because each recharge cycle damages the chemistry of the battery a bit. Most lithium-ion batteries are rated for at most 300-500 full recharge cycles.
This is especially important with laptops and other devices with internal batteries that are not user-replaceable, and with devices that are discontinued, as their replacement batteries often become unavailable.
I bought in June 2020 a nice Lenovo ThinkPad E495 laptop with an internal 45,000mWh lithium-ion battery, and these have been the capacity readings at the end of each month since:
Date | Maximum capacity | Percent of nominal |
---|---|---|
2020-06-28 | 44,970mWh | 99.93% |
2020-07-28 | 44,960mWh | 99.91% |
2020-08-28 | 44,310mWh | 98.47% |
2020-09-28 | 42,940mWh | 95.42% |
2020-10-28 | 42,500mWh | 94.44% |
2020-11-28 | 40,870mWh | 90.82% |
2020-12-28 | 40,850mWh | 90.78% |
2021-01-28 | 38,770mWh | 86.16% |
2021-02-28 | 38,770mWh | 86.16% |
2021-03-28 | 38,770mWh | 86.16% |
2021-04-28 | 38,770mWh | 86.16% |
2021-05-28 | 38,770mWh | 86.16% |
2021-06-28 | 38,770mWh | 86.16% |
So the maximum capacity was falling quite rapidly in the first 6 months, and then it stopped dropping, because I set charge thresholds so that the battery is only ever partially charged (details below):
Note: just to be sure, I did change the thresholds to let the battery charge to full capacity, and it did charge to 38,770mWh after these 12 months.
It surprised me how very effective partial charging is at slowing down the enervation of the battery. The relevant Linux settings are:
# grep -H . /sys/class/power_supply/BAT0/charge_*
/sys/class/power_supply/BAT0/charge_start_threshold:50
/sys/class/power_supply/BAT0/charge_stop_threshold:70
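To set them, one can write to the same sysfs attributes; a minimal sketch, assuming the thinkpad_acpi module is loaded and the battery is BAT0 (it may be BAT1 on some models):
# start charging only below 50%, stop charging at 70%
echo 50 | sudo tee /sys/class/power_supply/BAT0/charge_start_threshold
echo 70 | sudo tee /sys/class/power_supply/BAT0/charge_stop_threshold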
Note: the recommended settings for the Thinkpad (and they apply in general to all lithium-ion batteries) are:
Battery longevity is affected by age, the number of charge cycles, amount of time at full charge, and high temperature.
For maximum lifespan when rarely using the battery, set Custom charge thresholds to start charging at 40% capacity and stop at 50%, and keep the ThinkPad cool. [...]
If the battery is used somewhat frequently, set the start threshold at around 85% and stop at 90%. This will still give a good lifespan benefit over keeping the battery charged to 100%.
Those settings work with the thinkpad_acpi module, which is not always updated in older kernels for newer ThinkPad models; an alternative is to use the generic acpi_call module, which may require some customization of these settings:
echo '\_SB.PCI0.LPCB.EC0.VPC0.SBMC 3' | sudo tee /proc/acpi/call
echo '\_SB.PCI0.LPCB.EC0.VPC0.SBMC 5' | sudo tee /proc/acpi/call
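Note that /proc/acpi/call only exists once the acpi_call module is loaded, and the ACPI method path above is model-specific, so it may need adjusting for a given laptop:
sudo modprobe acpi_call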
Note: with Ubuntu GNU/Linux and ThinkPad laptops there is the TLP package, with which one can put in something like /etc/tlp.d/battery.conf lines like:
START_CHARGE_THRESH_BAT0=85
STOP_CHARGE_THRESH_BAT0=90
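After editing that file the thresholds can be applied without rebooting, assuming the tlp service is installed and enabled:
sudo tlp start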
The downside of setting the thresholds is that when the battery is charged only to 50-70% of maximum it only powers the laptop for around 2 hours, as it typically draws 8,000mW to 10,000mW when using networking. That means that before taking the laptop away I have to change the settings above and wait until it charges to 100%. Since the battery is fixed internally I cannot easily upgrade it to one with a larger capacity, as I did on my previous laptop.
Note: with older laptops that did not have charge thresholds but had removable batteries I used a different approach: a small-capacity battery was sacrificed to constant recharging (and lost much of its capacity, as in a 60% loss) when the laptop was on a desk connected to mains power, and I kept a larger battery just for travelling.