1,055 books…

… are in my personal library, yay!

I had been meaning to digitally catalog my book collection for some time now. I have on several occasions found books at used bookstores that I wasn’t sure whether or not I already owned (typically books in a series or books by prolific authors). So I finally used a free app called Libib to digitally catalog the books I own (not including eBooks at the moment; I only have perhaps a dozen of those). Next time I am wandering the shelves of a used bookstore, I can now search the app to be sure of what I have and what I don’t. Even while cataloging the books, I found a few to weed out because I own multiple copies of them.

You can scroll through my library here: https://shannifin.libib.com/

(Unfortunately, there does not yet seem to be a way to sort the public listing any way other than by title.)

I get the majority of my books used, and have walked away with some big hauls for cheap when stores are going out of business or getting rid of excess stock. I’m sure I still spend too much money on books considering my slow reading speed, but they’re addictive to collect, aren’t they?

I’ve only read around 10% of these books. Of course, some books are more for reference and not really meant to be read from front to back anyway. Still, with my current reading speed, I will likely die with the majority of these books left unread. Which is fine, because upon death I will have access to infinite knowledge… I hope.

Anyway, if you’re a book lover or collector and wish to digitize a record of your catalog, Libib is the best free app (for Android) I’ve come across so far. It also allows you to export a CSV file, which is handy.

Statistics do not determine probability

Well, that really depends on what probability you’re asking about. Perhaps it is more clear to say: The statistics of past events do not determine the probability of future events.

(At least not in and of themselves.)

An obvious example: Suppose you flip a coin three times. The resulting statistics, especially with the sample size being so small (and odd, for that matter), naturally won’t reflect the intuitive 50/50 probability of flipping heads on the fourth flip.

What if you flip a coin 10 times and get heads each time? Does flipping 10 heads in a row imply anything at all about the probability of flipping heads on your eleventh flip? (The answer is no.)
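If you want to see it concretely, here’s a quick simulation sketch in plain C (using rand() for simplicity, which is fine for illustration): among fair-coin runs that happen to begin with 10 heads in a row, the eleventh flip still comes up heads about half the time.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));

    long streaks = 0;        /* runs that started with 10 heads in a row */
    long heads_on_11th = 0;  /* of those, how many got heads on flip 11  */

    for (long trial = 0; trial < 10000000; ++trial) {
        int all_heads = 1;
        for (int flip = 0; flip < 10; ++flip) {
            if (rand() % 2 == 0) {   /* tails: not a 10-heads streak */
                all_heads = 0;
                break;
            }
        }
        if (!all_heads)
            continue;

        ++streaks;
        if (rand() % 2 == 1)         /* the eleventh flip */
            ++heads_on_11th;
    }

    printf("streaks of 10 heads: %ld\n", streaks);
    printf("P(heads on 11th | 10 heads so far): %.4f\n",
           streaks ? (double)heads_on_11th / streaks : 0.0);
    return 0;
}
```

The conditional estimate hovers around 0.5 no matter how long the preceding streak was; the past flips simply don’t feed into the next one.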

I bring this up because it’s annoyingly astounding how many times people will bring up statistics as evidence of societal privilege, oppression, or institutional racism / sexism.

For example, one may find that at a certain company, only 5% of the employees are black, and 95% are white. Does this mean a black person picked from the general population at random is far less likely than a white person to get a job there? Of course not. Firstly, the statistics of who’s already been hired don’t tell us anything about applicants who weren’t hired (are fewer black people applying in the first place?), and secondly, we’re ignoring quite a lot of other variables, such as interest in what the company does and necessary qualifications.

To make the fallacy a little more obvious: Suppose the company has 100 employees, 5 of whom are black (thus 5%). Then a white person retires and they hire a black person in his place. Does this mean the probability of any random black person getting a job there just rose by 1%? That is, does hiring a black person increase the probability of any random black person being hired? Obviously not. (At least, I hope it’s obvious.)

And yet this fallacious way of interpreting statistics is brought up again and again in discussions of race and sex and privilege, as though the statistics of past events alone somehow determine the likelihood of your future. (“You have so many opportunities! Just look at the stats!”)

What’s even sadder is that this way of thinking seems to persuade amiable people to believe that they have some kind of moral obligation to put themselves down based on their race or sex for the greater good, as in: “I shouldn’t apply for that job because I have white male privilege; that job should really go to a minority who doesn’t share my privilege!” or “I shouldn’t seek financial aid for my white children because they already have so many opportunities just by virtue of being white!”

You don’t make the world better with that sort of thinking. You make it worse.

Font rendering: stb oversampling vs NV path rendering

I told you I wanted to try stb oversampling in my last post, and I did! Here is the result: stb 2×2 oversampling on top, nanovg in the middle (which is based on stb anyway, but without oversampling), and NV path rendering on the bottom, shown at screen size and then at 5x zoom:

So stb oversampling definitely looks the best, although it is still pretty fuzzy. And moving it around by subpixels looks very decent; it doesn’t get much of the “shimmer” effect. (Still a bit, but not enough to be bothersome.)
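For anyone curious, enabling the oversampling is just a matter of calling stbtt_PackSetOversampling before packing the font atlas. A rough sketch with stb_truetype’s packing API (the buffer names, atlas size, and font size here are just placeholders, not my actual values):

```c
#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"

/* ttf_data: the raw .ttf file already loaded into memory (placeholder).
   atlas:    8-bit single-channel pixels, uploaded to a GPU texture later. */
static unsigned char atlas[512 * 512];
static stbtt_packedchar packed[95];          /* ASCII 32..126 */

int pack_font(const unsigned char *ttf_data)
{
    stbtt_pack_context pc;
    if (!stbtt_PackBegin(&pc, atlas, 512, 512, 0, 1, NULL))
        return 0;

    /* 2x2 oversampling: each glyph is rasterized at twice the resolution
       horizontally and vertically, then filtered back down when sampled,
       which is what tames the subpixel shimmer. */
    stbtt_PackSetOversampling(&pc, 2, 2);

    stbtt_PackFontRange(&pc, ttf_data, 0, 18.0f, 32, 95, packed);
    stbtt_PackEnd(&pc);
    return 1;
}
```

The drawing side doesn’t change at all: quads still come from stbtt_GetPackedQuad; oversampling only affects how the glyphs are rasterized into the atlas.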

Here’s stb oversampling (top) vs NV path rendering (bottom) with a bigger font size (screen size then 2x zoom):

Here, I think NV path rendering looks better; it’s definitely less fuzzy. (The trade-off is that it does suffer from more “shimmer” when translated by subpixels, but it doesn’t bother me too much.)

You can also see that NV path rendering is able to utilize proper kerning: the ‘e’ tucks in slightly under the capital ‘T’, as it should. Each letter isn’t being drawn on its own textured quad, so overlapping glyphs are trivial. (Well, for the end library user, at least.)
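The kerning really does come almost for free with NV path rendering: glGetPathSpacingNV can return per-glyph x offsets that already account for kerning pairs, and those offsets feed straight into the instanced stencil/cover calls. A rough sketch of the usual pattern (assumes a GL context with NV_path_rendering, a glyph path range already created via glPathGlyphRangeNV, and the stencil-test state set up elsewhere; not my exact code):

```c
#include <GL/glew.h>   /* or any loader exposing NV_path_rendering */
#include <string.h>

/* Draw an ASCII string from a previously created glyph path range
   (path name = glyph_base + codepoint). */
void draw_kerned_text(GLuint glyph_base, const char *text)
{
    GLsizei n = (GLsizei)strlen(text);
    GLfloat xoffsets[256] = { 0.0f };        /* per-glyph x translation */
    if (n > 255) n = 255;

    /* Accumulated advances, including kerning between adjacent glyphs.
       The first glyph stays at x = 0, so results go into xoffsets + 1. */
    glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
                       n, GL_UNSIGNED_BYTE, text, glyph_base,
                       1.0f, 1.0f, GL_TRANSLATE_X_NV, xoffsets + 1);

    /* Two-pass "stencil, then cover" fill of all glyphs at once. */
    glStencilFillPathInstancedNV(n, GL_UNSIGNED_BYTE, text, glyph_base,
                                 GL_PATH_FILL_MODE_NV, 0xFF,
                                 GL_TRANSLATE_X_NV, xoffsets);
    glCoverFillPathInstancedNV(n, GL_UNSIGNED_BYTE, text, glyph_base,
                               GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
                               GL_TRANSLATE_X_NV, xoffsets);
}
```

Since the glyphs are real filled paths rather than atlas quads, they can overlap however the spacing says they should.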

So, I think that completes my foray into font rendering for now. I’m too lazy to make bitmap fonts at the moment; stb oversampling will work for smaller fonts for now. Time to continue on to other GUI elements. I will try to design the GUI system so that it can utilize any font rendering system, should I wish to add bitmap font support in the future.
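I haven’t settled on the details, but the rough idea is to hide the font renderer behind a small interface so the GUI code never cares which backend is doing the drawing. A minimal sketch (all names here are hypothetical, just to show the shape of it):

```c
/* Hypothetical font-backend interface for the GUI: any renderer
   (stb atlas, NV path rendering, future bitmap fonts) fills these in. */
typedef struct FontBackend {
    void *userdata;                                   /* backend state */
    float (*measure_width)(void *ud, const char *text, float size);
    float (*line_height)(void *ud, float size);
    void  (*draw_text)(void *ud, const char *text,
                       float x, float y, float size, unsigned int rgba);
} FontBackend;

/* GUI code only ever talks to the interface, never to a specific library. */
static void gui_draw_label(const FontBackend *fb, const char *text,
                           float x, float y, float size)
{
    float w = fb->measure_width(fb->userdata, text, size);
    (void)w;  /* would be used for centering, clipping, etc. */
    fb->draw_text(fb->userdata, text, x, y, size, 0xFFFFFFFFu);
}
```

Swapping in a bitmap-font backend later should then just mean writing another set of callbacks, with no changes to the GUI elements themselves.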