I recently finished reading The Case Against Reality by Donald Hoffman.
It’s a short book, only 200 pages, but it still felt too long. Too much filler and repetition. You’re perhaps better off watching an interview with the author on YouTube.
The main premise is simple: we don’t see reality as it truly is, but rather as it relates to our evolutionary fitness.
Some obvious examples of our limited perceptions include:
- We only see certain wavelengths of light; we cannot see infrared or ultraviolet.
- We only hear a certain range of frequencies of sound.
- Our sense of smell is very limited, and often comes with instinctual judgments of pleasantness or disgust.
- We cannot sense oxygen in our lungs; rather, we can only feel the effects of having too little.
- We experience being surrounded by solid things, yet atoms are mostly empty space.
- Lots of optical illusions clearly trick our visual perceptions.
This means that everything we perceive in the physical world is actually a high-level abstraction of some unperceived foundational reality. A book, for example, only exists in our minds as a concept, a collection of perceptions and sensory experiences. These perceptions correspond to things in physical reality (that we can’t perceive directly), but the perceptions themselves don’t actually exist in physical reality.
The book’s author compares the mind-reality relationship to icons on a computer. When you use a computer, you manipulate highly abstracted icons, imagining that files occupy physical space and have locations. (The word “file” is itself an abstraction that aids the metaphor.) Inside the computer, everything is just 1’s and 0’s passing through transistors. It would be hopelessly inefficient to try to derive meaning from those long binary strings, so we work with high-level abstractions: colored pixels on a screen that correspond to those 1’s and 0’s. “Files” don’t even really exist in memory; computer memory is just one big ordered collection of 1’s and 0’s. A file only comes into existence when some program (like an operating system) decides how to split the bits into separate groups, and that decision is ultimately made by a human mind, which is where the meanings of those 1’s and 0’s come from in the first place.
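To make the metaphor concrete, here’s a minimal toy sketch of my own (not from the book): the `storage` buffer, `file_table`, and `read_file` below are all made up, but they show how “file” boundaries exist only in the interpreting program, not in the bytes themselves.

```python
# Toy illustration of the interface metaphor: raw storage is just one
# undifferentiated run of bytes; "files" only appear once a program
# imposes an interpretation on top of it.

# One flat buffer of bytes -- the "1's and 0's" with no inherent structure.
storage = b"Hello, worldGoodbye, world42"

# A made-up "file table": the names and byte ranges are pure convention,
# chosen by the program (and ultimately by a human), not by the bytes.
file_table = {
    "greeting.txt": (0, 12),
    "farewell.txt": (12, 26),
    "answer.bin":   (26, 28),
}

def read_file(name: str) -> bytes:
    """Return the slice of storage that the table says is this 'file'."""
    start, end = file_table[name]
    return storage[start:end]

print(read_file("greeting.txt"))   # b'Hello, world'
print(read_file("farewell.txt"))   # b'Goodbye, world'
print(read_file("answer.bin"))     # b'42'

# Swap in a different table and the very same bytes become different
# "files" -- the boundaries live in the interpretation, not in the storage.
```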
OK, that’s all well and good, but so what?
Well… I don’t know. The book doesn’t really go into why understanding this might be important. Maybe it helps you appreciate the possibility of other perspectives? Or keeps you from taking your perceptions for granted, or the meanings you yourself imbue things with? Or helps you appreciate that there’s a ton of reality you can’t even see? Perhaps it has some applications for AI or something?
Interesting stuff to think about anyway.
The last chapter is the most confusing. The author starts talking about what he calls “conscious realism”, which I can’t claim to understand very well. He writes on page 184:
> If we grant that there are conscious experiences, and that there are conscious agents that enjoy and act on experiences, then we can try to construct a scientific theory of consciousness that posits that conscious agents—not objects in spacetime—are fundamental, and that the world consists entirely of conscious agents.
Um… OK?
Actually, I once had a dream in which I understood that reality and spacetime are created collectively by consciousnesses, so I find the idea compelling. On the other hand, I really don’t understand the idea any deeper than that. On some level, it feels like just playing semantic games with “reality” and “consciousness”, which is maybe all one can do.
(If I say “A book exists only in one’s consciousness”, is such an existence not just as valid as, perhaps even more valid than, some other sense of existence?)
On page 190, the author goes on to write:
> The definition of a conscious agent is just math. The math is not the territory. Just as a mathematical model of weather is not, and cannot create, blizzards and droughts, so also the mathematical model of conscious agents is not, and cannot create, consciousness. So, with this proviso, I offer a bold thesis, the Conscious Agent Thesis: every aspect of consciousness can be modeled by conscious agents.
I still don’t really get it. Also, don’t you still have to answer what consciousness itself is? (And can you?)
So, overall, some interesting ideas, but I’m not quite sure what, if anything, I can do with them.