jsburbidge: (Default)
 When I work from home, I work online, but I use Bell for internet, Telus for phone, and have an employer who seems not to rely on Rogers at all.[1]. So although I knew that some fellow employees were having to use public WiFi sites because their Rogers connections were down, I gave little thought to the outage until a planned release was deferred on account of the outage, and even then that was because it affected the availability of support staff.  Only when my daughter phoned me to tell me that debit in general was down did I find out that the failure of a single vendor has essentially brought whole blocks of commerce (plus services like 911) to a screeching halt.[2]

Aside from noting that both Rogers and other large services should be looking very carefully at their architectures for redundancy - the easy fix, for vendors like Interac, is probably to shift to parallel vendors providing load-balanced access to communications; Ghu knows what Rogers' architecture is like, and they are not being very clear - I see that there are calls for steps to be taken to provide more vendors and less dominance by a few. (Essentially two: Bell and Telus share much of the same backbone system.)

This is not a new idea, although usually the reason has been concern about limited commercial competition, not system reliability. The previous attempts to provide for more vendors have not been successful, at least from the point of view of stability and robustness of the economy as a whole. (The smaller vendors use the large vendors' hardware and rent access in blocks.) This is because the substantial cost of building another backbone is a sizeable barrier to entry.

If the government wants to have another active competitor in the market, it would either have to provide massive subsidies to a startup (this would not fly, politically and perhaps legally) or enter the market itself with a Crown corporation. (Note that the aim of such a corporation would not be to provide monopoly services, as Bell used to or as the LCBO and Ontario Hydro do; that would defeat the purpose. The aim would be to increase the number of distinct vendors.) For practical purposes this also means that prices would effectively be set by the government, not just regulated by the CRTC as they are now. (Whatever price was charged by such a Crown corporation would become the de facto ceiling for basic internet services.) It would also mean considerable reductions in planned growth for the telcos and probable actual shrinkage of their markets.

Would the mandate of such a company cover all, or most, residents, or would it be confined to the areas more critical to general commerce? Practicality would argue for the former, but politics would probably demand the latter. Costs would be higher as a result.

The new system itself would have to provide at every level for a high degree of redundancy and have significant overcapacity in order to handle unexpected eventualities. (Consider an existing vendor choosing to exit the market and its customers moving to the new Crown corporation; and unexpected eventualities are exactly the driving reason for such a system.)

So it would be an expensive, highly contentious, and lengthy initiative which would have to last through multiple governments. (Cheap alternatives like heterodyning IP over the power supply are most useful at the final delivery stage, and do not address the problem of providing for redundancy in the backbone.)

Do I expect this to happen? Not on the basis of a 24-hour incident - though it would be wisest to consider it a shot by the future across our bows.

[1]To the level that it was the only one of the major banks whose debit system was unaffected by the outage. Which didn't help them much, as Interac was affected, which meant that although they were up in principle, connections from merchants were down.

[2] I realize that credit was not affected. In some ways that's worse, because the cost of the outage would have fallen disproportionately on the poor, who are less likely to have credit.
jsburbidge: (Default)
 A few days ago, one of my co-workers contacted me about a possible bug in some code he had been going over. Part of the code went back to the original check-in when it was migrated from a system called Harvest about four years ago (and lost all of its history in the process), and a number of lines had my name in them in git blame.

A little bit of checking showed that the algorithm was essentially as it had been four years ago. Several lines were marked as mine because I had converted the macro TRUE to the boolean value true on a couple of lines, and one because I had taken a freestanding C function and turned it into a member of the class in which it operated.  For practical purposes, the code was the same as it had always been - but my name was all over it. In addition, the problem would be expected to take the form of a line having dropped out, and there is no blame tracking attached to deletions.

In actual fact, the conclusion to be drawn was that the code was legacy code. Minor tweaks obscured that fact.

Git blame operates on a per-line basis. But any change to the line - tweaking parentheses, for example, or converting a C-style cast to a C++ cast - makes you the owner of the line.

On a greenfield project where responsibility is doled out in blocks it might be useful, but on a legacy project it's worse than useless.

By coincidence, I had been looking at the blame record for a makefile the day before. The makefile had an if-else block where both branches had the same statements. (In other words, there should have been no if-else block, but just the list of statements.) Blame shows five different names associated with the block of code (all of whom, except (I think) the oldest one, have some passive responsibility for the poor structure), but not in any coherent manner.

When I look at a block of code and want to see its history, I want to see how the algorithm evolved, not how different lines were tweaked while retaining the same algorithm.

You can't avoid this problem as long as your algorithms are line-based. It's a whole different level of difficulty to provide a program which divides a program into logical chunks and applies that analysis to the raw record; or (worse) to determine when apparently minor changes create new logic while passing over different implementations of the same logic. (A for loop and a find_if statement may be formally equivalent, but don't expect any automated help to know that.)

So I will continue to avoid git blame. If I really need to look at the history of code, I'll look up diffs from the history of the codebase and look at them as integral wholes.

Jubilee

Jun. 2nd, 2022 06:24 am
jsburbidge: (Default)
 From Clee to heaven the beacon burns,
      The shires have seen it plain,
From north and south the sign returns
      And beacons burn again.

Look left, look right, the hills are bright,
      The dales are light between,
Because 'tis fifty years to-night
      That God has saved the Queen.

Now, when the flame they watch not towers
      About the soil they trod,
Lads, we'll remember friends of ours
      Who shared the work with God.

To skies that knit their heartstrings right,
      To fields that bred them brave,
The saviours come not home to-night:
      Themselves they could not save.

It dawns in Asia, tombstones show
      And Shropshire names are read;
And the Nile spills his overflow
      Beside the Severn's dead.

We pledge in peace by farm and town
      The Queen they served in war,
And fire the beacons up and down
      The land they perished for.

"God save the Queen" we living sing,
      From height to height 'tis heard;
And with the rest your voices ring,
      Lads of the Fifty-third.

Oh, God will save her, fear you not:
      Be you the men you've been,
Get you the sons your fathers got,
      And God will save the Queen.

- A. E. Housman

God of our fathers, known of old,
   Lord of our far-flung battle-line,
Beneath whose awful Hand we hold
   Dominion over palm and pine—
Lord God of Hosts, be with us yet,
Lest we forget—lest we forget!

The tumult and the shouting dies;
   The Captains and the Kings depart:
Still stands Thine ancient sacrifice,
   An humble and a contrite heart.
Lord God of Hosts, be with us yet,
Lest we forget—lest we forget!

Far-called, our navies melt away;
   On dune and headland sinks the fire:
Lo, all our pomp of yesterday
   Is one with Nineveh and Tyre!
Judge of the Nations, spare us yet,
Lest we forget—lest we forget!

If, drunk with sight of power, we loose
   Wild tongues that have not Thee in awe,
Such boastings as the Gentiles use,
   Or lesser breeds without the Law—
Lord God of Hosts, be with us yet,
Lest we forget—lest we forget!

For heathen heart that puts her trust
   In reeking tube and iron shard,
All valiant dust that builds on dust,
   And guarding, calls not Thee to guard,
For frantic boast and foolish word—
Thy mercy on Thy People, Lord!

- Rudyard Kipling

The Housman poem is from 1887, reflecting the actual meaning of "jubilee" (a fifty-year festival) going back to the Mosaic law. The Kipling poem is from 1897: Victoria was one of very few English monarchs to pass the fifty-year mark (Henry III and George III also did; Edward III just managed 50 years). Elizabeth has passed both.

The current monarchy is a bit of a paradox: the great advantage of a monarchy in a parliamentary democracy is that the head of state is not appointed by the government and is in no way beholden to it, providing an independent check on extreme misuse of power. (The risks of a weak head of state can be seen in the facedown of Michaëlle Jean by Stephen Harper.) But we want that check only in extremis; in day to day life we want the monarch to be a figurehead only.

Remove

May. 21st, 2022 10:36 am
jsburbidge: (Default)
 A few days ago I was doing a code review in which (in essence) the following loop occurred:

class Foo;

bool matches(const Foo& inVal);

std::vector<Foo> x;

std::vector<Foo>::iterator i = x.begin();
while (i != x.end())
{
    if (matches(*i))
        i = x.erase(i);
    else
        ++i;
}

I raised two issues with it: one, that there was a bug in it (it left out the else, meaning that where consecutive members of the vector matched the condition the second would be skipped). The second was that the cost was fairly high; at the time I raised this by asking whether it made sense to replace the vector with a deque (reduces the cost for large sets of data) or a list (reduces the cost for all sets of data, but can raise the cost of other operations significantly), and was told that it would be a short vector and that the overall expense would therefore be low. (The cost is high because every time an element is deleted all the following elements have to be shifted left once.)

About 24 hours later I asked about the use of remove_if and erase, shown below:

class Matcher {
public:
    bool operator()(const Foo& inVal) const {
        return matches(inVal);
    }
};

x.erase(std::remove_if(x.begin(), x.end(), Matcher()), x.end());

I received the answer that the use of remove_if required extra work for a minor bit of code, and it was left there, as far as that issue went.

But I remained curious.  How much extra work was it? So I actually drew up the code bits above and counted lines. If we leave out the setup (first three lines) the STL code snippet is actually one line shorter.

(This had to be workable in a C++03 compiler. A C++11 implementation with the STL algorithm using lambdas would be shorter still.)
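For the record, a C++11 version of the same idiom, with the functor replaced by a lambda at the call site (Foo and matches here are stand-ins for the real types, not the code under review):

```cpp
#include <algorithm>
#include <vector>

struct Foo { int value; };

// Stand-in for the real predicate from the review.
bool matches(const Foo& inVal) { return inVal.value < 0; }

void removeMatches(std::vector<Foo>& x)
{
    // The erase-remove idiom: remove_if shuffles the keepers to the
    // front in O(N); erase then merely trims the tail.
    x.erase(std::remove_if(x.begin(), x.end(),
                           [](const Foo& inVal) { return matches(inVal); }),
            x.end());
}
```

(In C++20 this collapses further, to std::erase_if(x, matches).)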

What are the benefits of the STL approach? Well, first, it's not fragile; the bug caught on the code review is impossible in the equivalent STL code. The bug is directly related to another minor weakness of the iterator-based code: because of the variable incrementing logic it can't use an idiomatic for loop, but has to use while().

Secondly, it's faster; remove_if is an O(N) operation and the final erase, which only resets the end of the vector, is very cheap indeed.

Third, it is immediately clear what the code is doing. Because of some additional verbiage in the actual code (plus the missing else, which obscured the form of the loop) I had to look twice to be sure of what was going on.

This is, in a nutshell, an illustration of why, in general, STL algorithms are preferable to hand-rolled loops: more reliable and generally faster. There are some subtle ways of using find_if and transform in unusual ways[1] but remove_if is clear about what it does.

But it's also an illustration of the implicit prejudice I find in developers against using the STL algorithms. The combined use of remove_if and erase is a very well-known idiom; there is nothing difficult about it. But the perceived - but not actually significant - overhead of having to provide a functor to feed to the algorithm [2] seems to constitute a mental barrier.


[1] For example, using find_if for "process every item in this ordered vector below a given value", for which it is admirably suited. Using for_each is less efficient, as find_if will cut off processing once the relevant value is found or surpassed if your test is written correctly. But you throw away the result of the search, which is not normally expected.
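A sketch of that use, under the stated caveat that throwing away find_if's result is not what a reader normally expects (processBelow and its parameters are my invention):

```cpp
#include <algorithm>
#include <vector>

// Unusual use of find_if: "process" every element of a sorted vector
// below a limit, stopping as soon as the limit is reached. The iterator
// result of the search is simply discarded.
void processBelow(const std::vector<int>& sorted, int limit,
                  std::vector<int>& out)
{
    std::find_if(sorted.begin(), sorted.end(),
                 [limit, &out](int v) {
                     if (v >= limit)
                         return true;  // cut off the traversal here
                     out.push_back(v); // the "processing" step
                     return false;
                 });
}
```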

[2]Of course, even that is not real. If your test is in a function, as above, you can just pass a function pointer, although it is likely to be slightly less efficient.  If it's not in a function, it's a logical unit and either should be in a function or in a functor in any case.[3] (In the actual code the test was actually not a function, but a comparison expression. This actually did mean that the number of lines needed for the STL approach would be a little greater, as the test needed to be encapsulated for the STL where it had been merely dropped into the loop.)

[3] The marginal case being where this test is used only here. If it expresses a meaningful concept in the problem domain, though, it's a good bet that it is not.
jsburbidge: (Default)
 Well, not precisely. But its method of play would count as cheating for a human, and its measure of skill is based on a "cheating" algorithm.

The Wordle Bot internalises the finite set of all the answers set for the game. Note - and this is important - that this is not a complete set of the five-letter words in English, nor even a complete set of the common five-letter words: it does not, for example, include "slats" or "thine".

At any stage it analyses the finite set for the pattern which will eliminate the most possibilities by what it will include/exclude. (If aeiou were a word, guessing it would allow you to cross off all the words which use letters which it does not use and all the words which do not have green letters in the same place.) Given the size of the set, it can brute-force this; I suspect that it actually internalises the number of words left after each valid word as a starting guess at the beginning of the day, so that it can provide that feedback to the user quickly.

The fact that it always recommends "crane" as an opening word reflects a brute-force analysis. A guess based merely on letter frequencies would omit c and probably include t. "Later" and "rants" would be better guesses based on general letter frequencies.

After each guess, it narrows the finite starting set to the subset which matches all the known conditions. It calculates the number of guesses that would remain if each candidate word were chosen next, and selects the choice with the smallest value.
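The filtering step is easy to sketch; the function names and the feedback encoding here are my own assumptions, not the Bot's actual code:

```cpp
#include <array>
#include <string>
#include <vector>

// Compute Wordle feedback for a guess against an answer ('g' = green,
// 'y' = yellow, '-' = grey). Greens are assigned before yellows, and
// both words are assumed to be five lowercase letters.
std::string feedback(const std::string& guess, const std::string& answer)
{
    std::string out(5, '-');
    std::array<int, 26> remaining{}; // answer letters not matched green
    for (int i = 0; i < 5; ++i) {
        if (guess[i] == answer[i])
            out[i] = 'g';
        else
            ++remaining[answer[i] - 'a'];
    }
    for (int i = 0; i < 5; ++i) {
        if (out[i] == '-' && remaining[guess[i] - 'a'] > 0) {
            out[i] = 'y';
            --remaining[guess[i] - 'a'];
        }
    }
    return out;
}

// Keep only the candidates that would have produced the observed
// feedback for this guess.
std::vector<std::string> filterCandidates(
    const std::vector<std::string>& candidates,
    const std::string& guess, const std::string& observed)
{
    std::vector<std::string> kept;
    for (const auto& word : candidates)
        if (feedback(guess, word) == observed)
            kept.push_back(word);
    return kept;
}
```

Minimizing the size of the surviving set over all legal guesses is then a brute-force double loop over this function, which is entirely feasible for a list of a few thousand words.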

If you have guessed a valid English word which is not in the complete set of Wordle answers it assigns that guess a skill of zero.  What it really means is that the user has a bigger vocabulary (which should be a plus, not a minus, in analyzing skill).

There are two points to make:

1) This is effectively cheating. By doing a brute-force analysis against the internal list kept by the game, it performs the equivalent of looking in the back of the textbook for the answer.

2) This is also not playing a game. As with any other game, just using brute force to calculate one's moves - what it effectively recommends as strategy - is not in any meaningful sense (other than the von Neumann and Morgenstern one) a game.

Expected play behaviour is for players to bring their general prior knowledge, in this case a general knowledge of the English language and not of the arbitrary subdomain which is "the set of all Wordle answers".

To approach it in a fair manner it should use as a basis all the five-letter words in the OED. Heuristics allowing solutions would have to take into account the probability that a word is current enough to be considered. ("Thine" I use every week; I haven't seen "hight" ("named", not misspelled "height") in the wild in current use ever; "lossy" is a technical term with limited use, etc.)

Plunder

May. 14th, 2022 07:19 am
jsburbidge: (Default)
 Odd how things come up together...

I had just finished West's book on Indo-European myth and in particular noted at the end of it his identification of the deep roots of the comitatus in Indo-European culture - the king as the leader of a band of warriors whom he attached via, essentially, handing out booty or providing the opportunity to pillage.

The next day I was reading a book dealing with (inter alia) the conversion of Anglo-Saxon England, and read: "Seventh-century English kings did not 'govern' in any sense that we should recognize today. Their primary business was predatory warfare and the exaction of tribute from those they defeated. The spoils of successful war - treasure, weapons, horses, slaves, cattle - were distributed to their retainers as payment for past and lien upon future loyalty."

So the pattern described above has deep roots in Indo-European culture. Traditional poetry, whether about the Trojan War or a successful cattle-raid, reflects this.

In Europe generally, it is the Twelfth and Thirteenth Centuries which see the movement away from this pattern, at least at the local leader level. Prior to that point it was common for leaders well below the level of monarch to carry out raids and low-level warfare against their neighbours; it is in this period that the state begins to exert its centralizing powers to curtail this activity.  In areas where national borders were involved it persisted for rather longer (such as the Scots Borders). The general basis for lordship becomes, not the distribution of booty, but the conferring and defending of rights to land (or patents, or other privileges) which can generate a continuing stream of revenue for the holders.

(Booty doesn't entirely go away. Soldiers continued to be given the implicit, and sometimes explicit, permission to pillage on campaign, and even after it became entirely frowned upon (consider Wellington's army in the Peninsular War) remained (and remains) a problem. (How many Allied homes have "souvenirs", like the Roman bust discovered recently in Texas, picked up by soldiers in the Second World War?) But it was no longer the systematic basis of lordship.)
jsburbidge: (Default)
 From a Guardian article on archaeological finds under Notre Dame Cathedral :

"The find included several ancient tombs from the middle ages ..." 

No, it included mediaeval tombs from the middle ages. Ancient tombs would have to go back to the days of Lutetia. These are 13th to 14th Century, not ancient.
jsburbidge: (Default)
 It is worth noting that the decisions to drop vaccination requirements and mask mandates are, insofar as they are data-driven at all, based not on an evaluation of how many people may contract COVID-19, but on an evaluation of how many people will end up in hospital, and, in particular, in ICUs.

Put more bluntly, the government doesn't care about people getting sick, they care about hospital overcrowding. (This has been visible and explicit since the start of the pandemic.) They also badly want the whole thing to be "over" by the time of the June election.

It is also important to recognize that the primary benefit of masks - except at the high end, which involves respirators proper - is (1) at a population level and (2) that they protect other people from the mask-wearer more than vice-versa. So saying that people can still choose to wear masks misses the point; the people who choose not to wear masks are likely to skew less cautious in other ways and therefore to be at higher risk.

Masking is a low-cost high-benefit practice if generally adopted in public places. Dropping a general mask mandate does not just verge on the irresponsible but goes well into that territory.

At an individual level the obvious strategy to take is to limit going to places where there is a significant number of individuals one does not know and to wear a respirator, not just a cloth mask, when inside public places. Shopping online is still a better option in most cases; if one chooses to shop in brick-and-mortar locations, relatively smaller locations with better ventilation are better choices. Voting against the government in June is also a good idea - assuming that the opposition parties are willing to back continued restrictions. (I hold no real hope of this.)
jsburbidge: (Default)
 There's a not uncommon use case in programming where you want to do something once in a specific context: the typical example being where you want to log an event the first time, and only the first time, it happens (and it's not part of the program's startup, and typically may never happen at all, but if it happens at all may occur many, many times, which is why you want to limit the logging).

There's a common idiom for this:

static bool logged = false;
if (!logged)
{
    DoLogging(message);
    logged = true;
}

We will assume for the sake of discussion that this is not a routine which can be called by multiple threads at once, which would complicate the issue somewhat.

There are two small problems with this. The first is that it's an ugly little block, especially if it's used multiple times. (You can't extract it into a function called in multiple contexts because that static variable has to be unique for each context.) The second is that it's inefficient: we have to check the flag every time through. That means branching (with the attendant risk of misprediction) and, worse, because it's a static variable it will almost certainly not be in the memory cache, making for a slow load.

So what can we do? We need a mechanism which chooses one path once, another path after that, and which neither branches nor uses static data on subsequent runs.

If we think of it in terms of typical one-time activities, we might think of a constructor. We can do the following:

class OneTimeLogger
{
    public:
    OneTimeLogger(const std::string& inMessage)
    {
        DoLogging(inMessage);
    }
};

In context:

//...do stuff
static OneTimeLogger logger(message);
//...do other stuff

This looks attractive, but actually it solves nothing. First, because it's not a standard idiom it's going to confuse the hell out of a maintenance programmer. Any line which requires a detailed comment saying "do not delete this apparent no-op" is a bad thing. Secondly, it actually hides an expensive if/else switch. The compiler has to emit code checking for a first time on initializing a static object, and, worse, at least post-C++11 it has to make that check, and the initialization, thread-safe. (If this *is* a multi-threaded situation with potential contention, this might mean that the maintenance cost is worth it; it's the simplest thread-safe way of doing this I know. In that case, you might want to add a no-op log() function to the class and call it in every pass, so that it looks normal to a maintainer, although then you have to explain the no-op function where it's defined. The next alternative involves replacing the unique_ptr in the solution below with a shared_ptr or putting a mutex around the point where the smart pointer is updated.)

The expensive part of the test is one-time, but the test is still there, and it's going to be on static data. All that we've done is hidden the if/else test.
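As an aside, if thread safety rather than speed were the real requirement, C++11 packages exactly this hidden machinery as std::call_once. A minimal sketch, with DoLogging stubbed out so the effect is visible:

```cpp
#include <mutex>
#include <string>

int g_logCount = 0; // stand-in for the real logger, so the effect shows

void DoLogging(const std::string&) { ++g_logCount; }

void doWork(const std::string& message)
{
    // call_once guarantees the lambda runs exactly once, even with
    // multiple threads contending on the first pass.
    static std::once_flag logged;
    std::call_once(logged, [&message] { DoLogging(message); });
}
```

Like the static-object trick, though, this still consults a flag on every pass: it buys correctness under contention, not the branch-free fast path we are after.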

The other way out is polymorphism. Assuming that the calling context is in a class and not a freestanding function, we can do the following:

class MyClass
{
    public:

    class IThisEventLogger
    {
        public:
        virtual ~IThisEventLogger() { }
        virtual void log() = 0;
    };

    class OneTimeEventLogger : public IThisEventLogger
    {
        public:
        OneTimeEventLogger(std::unique_ptr<IThisEventLogger>& inParent, const std::string& inMessage):
            m_parent(inParent), m_message(inMessage)
        { }

        class NullThisEventLogger : public IThisEventLogger
        {
            public:
            virtual void log() { }
        };

        virtual void log()
        {
            DoLogging(m_message);
            m_parent.reset(new NullThisEventLogger());
        }
        private:
        std::unique_ptr<IThisEventLogger>& m_parent;
        std::string m_message;
    };

    MyClass(): m_logger(new OneTimeEventLogger(m_logger, "Message to be logged"))
    { }

    void doSomething()
    {
        //...stuff
        m_logger->log();
        //... more stuff
    }

    private:
    std::unique_ptr<IThisEventLogger> m_logger;
};

The trick is that the reset call in the logger is a fancy way of doing "delete this" (i.e. committing suicide), which is entirely legal in C++. (Also, passing in the address of the parent while constructing the parent is fine, because nothing happens with it until the object is fully constructed.) We choose to pass the message in the constructor because it reduces the cost of the no-op call to a bare minimum.

The first time log() is called, the message gets printed, and then the execution path for all future calls is changed to a no-op function. We still have a virtual call, but that should be on an object in the cache, and virtual calls are typically cheaper than branching. The call site is simplified; the only time complexity appears is when looking into the very detailed implementation, well away from the business logic in doSomething().

If the logging logic is sufficiently generic, the machinery for managing this can be extracted and reused so that it doesn't clog up the interface of the calling class. If the logic is complicated internally then the logger and the interface will have to be created locally. (If we have to log four variables, two of them integers and one floating-point as well as a string, we want to put the expense of generating the message to log into the one-time call as well, so a generic "pass a string to log as a parameter" may not be a good match, and pre-generating the message in the constructor, as above, may be impossible -- though usually something logged once and for all will have a very generic message).

The downside is that you need a separate logger for every message you want to print once - a classic trade of space for time. Of course, if your parent class is well-designed, it will have limited responsibilities, and the number of instances it will need will be correspondingly small. And the logger itself is cheap to construct - not much larger or costlier than the text of the message it logs; if it builds a custom message its content model is even simpler and its associated costs are even smaller.

ETA: you can make this more generic and more encapsulated by making the smart pointer a custom class which hides the delegation entirely and implements the logging interface (and, naturally, holds a smart pointer as a delegate).
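A minimal sketch of that ETA; OnceLogger, Armed, and Disarmed are hypothetical names of mine, and DoLogging is again stubbed out so the effect is visible:

```cpp
#include <memory>
#include <string>

int g_timesLogged = 0; // stand-in logger

void DoLogging(const std::string&) { ++g_timesLogged; }

class ILogOnce
{
    public:
    virtual ~ILogOnce() { }
    virtual void log() = 0;
};

// The wrapper implements the logging interface itself and hides the
// delegate-swapping entirely from the calling class.
class OnceLogger
{
    public:
    explicit OnceLogger(const std::string& inMessage):
        m_delegate(new Armed(m_delegate, inMessage))
    { }

    void log() { m_delegate->log(); }

    private:
    class Disarmed : public ILogOnce
    {
        public:
        virtual void log() { } // no-op on every subsequent call
    };

    class Armed : public ILogOnce
    {
        public:
        Armed(std::unique_ptr<ILogOnce>& inParent, const std::string& inMessage):
            m_parent(inParent), m_message(inMessage)
        { }
        virtual void log()
        {
            DoLogging(m_message);
            m_parent.reset(new Disarmed()); // "delete this" in disguise
        }
        private:
        std::unique_ptr<ILogOnce>& m_parent;
        std::string m_message;
    };

    std::unique_ptr<ILogOnce> m_delegate;
};
```

The calling class now just holds an OnceLogger member and calls log() on it; the one-time machinery no longer clutters its interface at all.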
 
jsburbidge: (Default)
 ... will Justin have the guts to follow his old man and say "Just watch me"?
jsburbidge: (Default)
 1) On checking my spam folder, I see that I have received two invitations to join the Illuminati, one in Italian.
 
If the AISB is going to contact anyone it will not be by cleartext e-mails. They will use proper tradecraft.
 
2) It is remarkable just how awful protesters' historical education is. The position of the Prime Minister has always, since the time of Walpole, been determined by the House of Commons. The Crown has no power to dismiss the PM and only very limited powers to prorogue Parliament (essentially, when the PM has lost the confidence of the house, or at the request of the PM). This was firmly established in 1649 and 1688, with tweaks in the 18th Century as the office of the Prime Minister developed.
 
3) I am getting tired of public health officials who are reported in the media as talking about masks and vaccines as though they were purely about individual risk rather than looking at the impact in populations of general adoption/dropping of particular activities. A 30 year old with two doses of vaccine who goes out without a mask is at a low risk of contracting symptomatic Covid and at very low risk of serious disease. But if 30-year olds in general do that, there will be a calculable increase in the spread of COVID to other parts of the population. Wearing a mask or being vaccinated is not principally about personal risk, in many cases; it is about being a responsible member of the body politic and of society.
 
4) Seen in real code, names slightly adjusted: 
 
class XKey
{
    public:
    XKey(const int inIdA, const int inIdB, const std::string& inName):
        m_idA(inIdA), m_idB(inIdB), m_name(inName)
    { }

    bool operator<(const XKey& inOther) const
    {
         return (m_idA < inOther.m_idA) &&
             (m_idB < inOther.m_idB) &&
             (m_name < inOther.m_name);
     }
     private:
     const int m_idA;
     const int m_idB;
     const std::string m_name;
 };
 
Surprising things will happen when you try to use a map with that as a key.
 
Don't do this.
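For the record, a correct strict weak ordering compares the members lexicographically; C++11's std::tie makes that mechanical (the class is otherwise as above):

```cpp
#include <string>
#include <tuple>

class XKey
{
    public:
    XKey(const int inIdA, const int inIdB, const std::string& inName):
        m_idA(inIdA), m_idB(inIdB), m_name(inName)
    { }

    bool operator<(const XKey& inOther) const
    {
        // Lexicographic: compare m_idA first, then m_idB, then m_name.
        return std::tie(m_idA, m_idB, m_name) <
            std::tie(inOther.m_idA, inOther.m_idB, inOther.m_name);
    }
    private:
    const int m_idA;
    const int m_idB;
    const std::string m_name;
};
```

Unlike the && version, this never declares two distinct keys equivalent, so a map keyed on it behaves as expected.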
jsburbidge: (Default)
 There's an old adage about being very careful about how you align a system to match its professed goals: if the goals are out of alignment, even though the intent may be clear, at least some actors will try to game the system.

(A classic example in software development is to measure productivity in lines of code, and make compensation dependent on productivity. At best, this provides no incentive for clean, concise code; frequently, it actively encourages coders to write deliberately verbose code to boost their lines of code.)

I have recently observed a specific example of this in another domain, namely, the TTC.

I regularly use a bus route which is relatively short and which is essentially a direct line from the station for most of the route, but splits into a loop at the end. I catch it on the leg going back to the station.

During rush hour, the predictions provided by the feed based on real-time data from the TTC are fairly accurate. Mid-day, they regularly would result in missing the bus.

The route is scheduled with busses N minutes apart, based on the number of busses on the route at the time: every 15, 20, or 30 minutes (10 at rush hour).

To maintain the schedule, the drivers are supposed to get to the far point of the loop where there is a stop they can wait at and then come back at a scheduled time.

Because the apps available for TTC predictions track busses in near-real time, I can see what happens: busses wait at the wait point until they are three minutes early, and then leave. This means that what was a ten-minute prediction when I looked at it, and a six-minute prediction at the point they start to move, suddenly collapses by a significant amount.

What is going on? The problem lies with the TTC's metrics. They count a bus as being on time if it is within a three-minute window on either side. This is meant to allow for problems with heavy traffic or construction which drivers can do little about, or for random times when fewer people are waiting for stops than the statistical average, speeding up the bus by reducing the number of stops. It is not meant to encourage drivers to start moving as soon as they are technically "on time", but that is what systematically happens.

They start moving early because they will get to the station "on time", but really three minutes early, and then have a three-minute longer break. (The same drivers tend to arrive on one side of the station, let off their passengers, and then sit there until just before they have to leave, proceeding to their loading bay only at the last moment, even in very unpleasant weather. The TTC consistently conveys a sense of being run for the benefit of its employees rather than its customers.)

At rush hour there are no delays built into the schedule; the only divergences from longer-term live predictions will be as a result of heavy traffic, which is rare on this route.

This leaves users with no recourse. The drivers have committed no formal infraction. The TTC could analyze the collected route data and penalize this behaviour if a driver engages in it systematically, but I'm willing to bet that the likelihood that the union would grieve such a change in the rules is a barrier to such a step.
jsburbidge: (Default)
 "The old year now away is fled,/The new year, it is entered": I am beginning to wonder whether 2021 may not end up being seen as a hinge year.
 
Personally, it has not been a particularly notable year either for good or bad. Work from home is essentially a non-issue, as most of my colleagues work in other countries and my interaction with them would be entirely online in any case. As an introvert, I feel little affected by the limitations on social interaction.
 
The one interesting marker of, perhaps, a lack of focus is books read. A number of years ago there was an interview early one year on CBC with somebody who had made the resolve to read one book a week.  The interviewer was fawning over this as a difficult task; my reaction was that, unless this was Ulysses or Tristram Shandy every week, it was hardly worth noting. So I started keeping lists of books read per year. Over the past few years it has tended to run in the mid-seventies or over per year. This last year it was 56, a drop of almost twenty books - and that despite the absence of the compulsive tendency to waste time on news while Trump was president, which had previously been a drag on my time.
 
This might be a question of what I read - looking back in detail, last year also has fewer books which I would rate very highly and more which I would rate as mediocre. It is certainly not competition with other media (I do not watch television and rarely stream video online). I put some of it down to additional energy subsumed by working from home on tasks which I enjoy (and am therefore likely to spend extra hours working on which would not be available if I worked at the office). Some of it may be the absence of a commute in which I had daily time to read. But there is certainly a difference.
 
On the public stage, though, 2021 looks very much like the year which showed that the forms of representative democracy which have been used for the last century and a half, more or less, are failing to be effective in the face of crises such as COVID-19 or climate change.
 
The dominance of the short-term feedback from a relatively frequent electoral cycle over considered expert advice is becoming a critical issue.
 
It also begins to look as though the model we have (in which the assumption is that pressures will tend to moderate the views of successful political parties) may be valid only when a rising tide is lifting all boats. When there is a drumbeat of bad news combined with permanent insecurity, recent experience suggests that the tendency is to create self-reinforcing cycles which drive parties away from consensus and towards polarization.
 
We certainly need the feedback from the general public as a check on government. Absent that, one gets arbitrary and autocratic rule which might be efficient and effective but is vastly more likely to be corrupt, inefficient, and driven by goals other than the public good. The question of how to get that level of feedback without the serious effects we can see from the predominance of short-term benefits over long-term ones is not easily resolved, and I have no answer.
 
Canada as a whole at least has a management crisis only (contemplate the Liberals for a moment), and one may hope that Ontario will give up on the PCs if they fail hard enough in the next few months of COVID management; the United States looks like it has a governance issue at the level of "on the edge of a civil war". The UK is heading down into some form of breakup of the Union accompanied by massive financial difficulties and severe restrictions on civil liberties (in England) for those who are not considered English enough. Europe shows governmental churn, internal tensions, and a momentum provided by its bureaucracy.
 
Socially, the culture wars continue to rage, but it looks as though majority sentiment is now firmly on the side of the new culture, even if that support's concentration in urban areas is a partial blocker on its political expression.
 
It was also a year full of wake-up calls on the climate front. (As also a year full of governments which have made general declarations of taking the issue seriously but are falling far short in practice.)
 
On the COVID side, people may be coming grudgingly to the acknowledgement that things will not return to all the old patterns (commercial, leisure, other activities). Expect the steady advance of the claim that internet access is a right like heat and water as remote access takes over as a norm for many things.
 
----
TL;DR summary
 
Best book read in the year: The Ethical Poetic of the Later Middle Ages by Judson Allen (Interesting and informative. Unfortunately the author died young(ish) shortly after the book was published and produced no more work. I had met the author once or twice (during a stay in France) and it was edited by a friend (now also dead untimely); this did not bias my reading of it.)
 
Worst book read in the year: The Invention of Yesterday: A 50,000-Year History of Human Culture, Conflict, and Connection, by Tamim Ansary. (Really badly researched. I think I posted about this.)
 
Thing I missed most that I couldn't do due to COVID restrictions: go to Midnight Mass
 
Thing I missed least that I couldn't do due to COVID restrictions: commute
 
Best political news of the year: it continues to look as though the CPC is self-destructing as knives get pulled out because their new leader only increased their popular vote without actually winning the election.
 
Worst political news of the year: it looks as though the US Republicans are generally doubling down on being worse than Trump. (No, really, he gets booed when he tells them he's vaccinated.)
 
Summary of the year: Dryden's The Secular Masque ( https://www.poetryfoundation.org/poems/44184/the-secular-masque )
jsburbidge: (Default)
 As I in hoary winter’s night stood shivering in the snow,
Surpris’d I was with sudden heat which made my heart to glow;
And lifting up a fearful eye to view what fire was near,
A pretty Babe all burning bright did in the air appear;
Who, scorched with excessive heat, such floods of tears did shed
As though his floods should quench his flames which with his tears were fed.
“Alas!” quoth he, “but newly born, in fiery heats I fry,
Yet none approach to warm their hearts or feel my fire but I!
My faultless breast the furnace is, the fuel wounding thorns,
Love is the fire, and sighs the smoke, the ashes shame and scorns;
The fuel Justice layeth on, and Mercy blows the coals,
The metal in this furnace wrought are men’s defiled souls,
For which, as now on fire I am to work them to their good,
      So will I melt into a bath to wash them in my blood.”
      With this he vanish’d out of sight and swiftly shrunk away,
      And straight I called unto mind that it was Christmas day.

- Robert Southwell
jsburbidge: (Sky)
Soldier from the wars returning,
Spoiler of the taken town,
Here is ease that asks not earning;
Turn you in and sit you down.
 
Peace is come and wars are over,
Welcome you and welcome all,
While the charger crops the clover
And his bridle hangs in stall.
 
Now no more of winters biting,
Filth in trench from fall to spring,
Summers full of sweat and fighting
For the Kesar or the King.
 
Rest you, charger, rust you, bridle;
Kings and kesars, keep your pay;
Soldier, sit you down and idle
At the inn of night for aye.

-- A. E. Housman

Yet Again

Oct. 26th, 2021 09:51 pm
jsburbidge: (Default)
 This surely does not need saying: the provincial plan for removing Covid restrictions is both insane and driven by politics. It is no coincidence that on this fanciful roadmap of the Ontario PCs everyone becomes entirely free to do as they will a couple of months before the provincial election.

In what fit of absence of mind they concluded that Covid would be so completely gone in another six months that they can plan on pulling back vaccine mandates, it is hard to imagine. The opening stages - allowing more density in places with vaccine mandates - are not prudent, but they are somewhat understandable. To assume that magically Covid will go away by March is pure wishful thinking. (Even if by some miracle Ontario could wrestle it to the ground, there would still be extensive reservoirs elsewhere.)

This is the extreme form of the government's complete failure to take the one critical step that is necessary: to say, clearly, that there will be no "return", no ability to resume the life of 2019. Even if the government were to drop its vaccine mandates entirely, all the employers and other fora will continue to worry about insurance and liability and are unlikely to drop their restrictions. Many people will, rationally, continue to avoid places where they are crowded together, although evidence from other jurisdictions suggests that fewer people are rational than one might hope. Masks will continue to be an important public health tool. Many people will continue to work from home, affecting life in urban centres.

The one genuinely bright spot on the horizon is the (likely) very near approval of Covid vaccines for children and the (equally likely) prospect of such vaccines being made mandatory (as many other vaccines are) for attendance at school.

It is, frankly, the task of the government, the parens patriae, to tell everyone the truth: life has changed, irreversibly (like one of those catastrophe theory transitions on a folded manifold).
jsburbidge: (Default)
 A couple of months ago I decided I needed a new belt - my previous one had lasted for years and was getting slightly decrepit - so I went to the Bay and bought a "Perry Ellis Reversible Smooth Leather Belt"; all the belts were much of a muchness as far as price was concerned and it was about the plainest belt I could find.

It lasted about two months.

After it broke, I found that the leather, instead of being sewn around a metal bar to attach it to the buckle, as is normal with belts, was actually anchored inside the buckle to a couple of narrow metal posts. So all the stress of wearing the belt in any functional manner was concentrated in those points, which ripped the leather in, essentially, no time.

This is worse than shoddy workmanship; this is poor design. It does not go so far as to run afoul of the Sale of Goods Act ("merchantable quality" being the only key term), but it amounts to a latent defect serious enough that I would never buy any goods from that manufacturer again, and look askance at the Bay for their purchasing decisions.

I replaced the belt with one from Harry Rosen. The belt cost four times as much but at least I have confidence that it will last a reasonable time. (My main problem with Harry Rosen was finding a belt in my size plain enough; the one I would have preferred was not available in my size at the store but the next one down my list was.)
jsburbidge: (Default)
 I normally have a good sense of where an election will end up a few days before it actually takes place. This one was more difficult: two days before the election I would have called the most likely result a reduced minority for the Liberals. In the end, the needle barely moved on numbers.

Last election I was already noting that the Liberals would do well to get rid of Trudeau, though that was before Covid, which pushed up his stock somewhat. I would reiterate the comment; he is clearly a drag on his party, but his retention of office without a significant loss of seats probably leaves him with enough power to avoid being pushed out.

What the Conservatives need to do, to deflect their usual problems next time round, is take the relatively moderate platform O'Toole ran on and maintain it up to the next election. As I gather that there are members who already want to depose O'Toole, I doubt they will manage it.

(The best grounds for attack on O'Toole the Liberals had, they wasted: pointing out the drift between his run for the leadership and his run for PM and suggesting that he is a weathervane seems to me like the obvious weak point. By next election he will be gone or his platform will have some stability. I doubt he can reinvent himself yet again.)

But the map shows the kind of current electoral trap Canada is in. There are two broad areas of solid blue: the west (for all practical purposes: the NDP has a few new seats there), and rural Ontario. (That these are the areas with the biggest Covid problems is not coincidental: in addition to Alberta and Saskatchewan, about which little needs to be said, it is the rural caucus in Ontario which by all accounts has been most opposed to reasonable anti-Covid measures.) In Ontario the whole GTA with a couple of exceptions went Liberal, and of those exceptions only one - Thornhill - went Conservative. Other urban areas voted Liberal, except where they were weighed down by large rural components of a single riding (such as Peterborough).

There are two big blockers to any majority government. First, the Bloc essentially prevents the Liberals from gaining a majority; their only other normal way there is a thoroughgoing collapse in the Conservative vote. (Both Bloc and Conservatives have been weak before; the Reform/PC split gave the Liberals under Chretien a long time in majority office, taking all, or nearly all the ridings in Ontario to make up for the Bloc in Quebec, and the Bloc was weak in 2015.)

The Conservatives, on the other hand, have tended to win only when dislike or distrust of the Liberals - Chretien's cronyism, Trudeau's / Turner's patronage appointments, distrust of Dion and Ignatieff - drives the electorate in the swing ridings of Ontario to abandon the Liberals for the Conservatives. Although the Tory base certainly feels that way about Trudeau, his weaknesses among the general population - seen as naive, self-centred, not terribly bright - have not, to date, produced that result. So the Conservative road to a majority runs through the suburbs of Ontario. The past several elections have been marked by a significant number of voters voting, not for the Liberals as such, but against the Conservatives when there is a risk of them winning, and younger voters in central Canada are even less likely to vote Conservative. To make things worse, the Covid crisis has made government with an activist and intrusive bent more generally attractive, which means that the Harperite/libertarian block making up the core of the parliamentary party is even more out of step than they might otherwise be.

The geographical split encourages the parties to vanish into the event horizons of their own bases, which decreases the likelihood of a breakthrough.

The NDP lacks a base which could realistically allow it to come within shouting distance of power, except of the sort they have right now as the key to the continued governance of the Liberals. The Greens essentially disintegrated.

All that being said, it would be unwise to claim, as I've seen some commentators do, that Canada is locked into a permanent minority situation, or - a closely related claim - that the Canadian people chose a minority. Over half of active voters almost certainly belong to the bases of the two major parties. Presumably they wanted a majority. (The NDP base might welcome a minority, but I doubt they voted for one.) An average of people's choices, filtered through FPTP, cannot be given a notional will or mind, any more than the players of the Prisoner's Dilemma want the probable outcome.

More critically, a lot turns on immediate personal reactions. The voters of Ontario elected Doug Ford because they were fed up with the Liberals, despite the chaos of the Conservatives and the very real doubts about Ford's suitability to govern. If the Liberals do something stupid, not necessarily more stupid than the various kerfuffles which marked, but did not end, Trudeau's tenure, O'Toole or an equivalent could walk into government. If the PPC or an equivalent splits the CPC vote more heavily and along structural fault lines, we could be in for another King-St.Laurent reign.

Or at another level, if post-Covid remote work sends more urbanites out into semi-rural Ontario, that block of rural Ontario CPC ridings could become far less certain. If the Alberta NDP becomes the government again and the post-Kenney PCs became a hissing in the dark, that solid blue block could splinter. Or if the dislocations of climate change throw most people's lives into uncertainty, the resulting resentment may drive some voters towards the right while driving others towards "a safe pair of hands". The current social and economic context is the opposite of stable.
jsburbidge: (Default)
 ...regarding Addison's The Witness For the Dead, but a little too long to go in as a comment.

The Witness For The Dead has three themes; one is resolved entirely; one is resolved for practical purposes; one has a level of resolution which I can best describe as a removal of difficulties. They interact in each other's solutions, giving an ultimate unity to the story which looks like three separate strands until very close to the end.

The first strand is the mystery strand, the one which follows standard conventions of the mystery novel, posed as a problem to be solved: the perpetrator is identified, and it is established that there will be no more crimes from that source.

The second strand is what I may call the feuding family strand. It is not posed as a problem to be solved, and its main dynamic is actually outside the scope of the story; Thera keeps getting dragged into it and having problems put in his way as a result of his role - it forces the temporary assignment with the ghouls and the vigil confronting the ghosts of the past, but it looks entirely unconnected until it provides Thera with a key to the resolution of the mystery plot. It passes out of his life; although it presumably remains an issue for the participants, it will have no further impact on Thera.

The third strand is the problem of Thera's personal situation: dealing with a hostile hierarchy, socially isolated. This has no final resolution as such, but events driven by both the other plots combine to improve his professional situation and provide what looks like a more positive future for his social situation.

(This is somewhat reminiscent of an analysis Sayers did, well after writing the novel, of Gaudy Night. She identified three different types of problem with different types and degrees of resolution there as well, with (similarly) the detective problem being the one with the neat solution. It may be worth remembering that Addison aka Monette aka Truepenny did an extended critical analysis of the Wimsey books some years ago on Livejournal and is presumably aware of the Sayers self-analysis.)

For the record, I thought it well done and not difficult to follow and would recommend it to anyone looking at the intersection of the fantasy novel and the detective novel.
jsburbidge: (Default)
Ripped from the headlines tonight: the Biden tax proposals include a 3% surtax on anyone earning more than 5 million (USD) per year.

There is an argument, if not for Commonweal levels of, um, levies, then at least for the pre-Thatcher UK where the top brackets were in the 99% category...
