The adolescent brain?

Blakemore says: “Adolescence is defined as the period of life that starts with the biological, hormonal, physical changes of puberty and ends at the age at which an individual attains a stable independent role in society.”

I’m not sure I understand this definition. The onset of puberty is fairly objective, but how do we define a “stable independent role in society”? Isn’t that exactly what modern society actively tries to prevent teens from having, by forcing them to spend their days on high school and homework, with the only adults they know being figures who tell them what to do?

In other words, the definition seems to say: “Adolescence starts with puberty, and ends when we adults decide it ends.”

Blakemore discusses a behavioral study in which a subject is asked to move objects around from the point of view of someone else. Studies show that, on average, adults are better at this task than adolescents; that is, adults make fewer errors. The conclusion, Blakemore states, is that “the ability to take into account someone else’s perspective in order to guide ongoing behavior, which is something, by the way, that we do in everyday life all the time, is still developing in mid-to-late adolescence.”

I’m not convinced this task so simply represents one’s ability to “take into account someone else’s perspective.” Nor am I convinced that a lower error rate on this task necessarily correlates with better social behavior, such as the ability to control one’s anger in the face of hostility, or to avoid perceiving someone else’s comments as personal attacks when they are not. I’m not sure these test results tell us anything useful about teenage behavior as a whole.

We could easily imagine someone practicing this task to such an extent that they attain an error rate of 5% or less. But who would argue that these people would thus behave better in emotional social situations? (And how would we define “better”?)

Blakemore goes on to quote Shakespeare’s The Winter’s Tale:

“I would there were no age between ten and three-and-twenty, or that youth would sleep out the rest; for there is nothing in the between but getting wenches with child, wronging the ancientry, stealing, fighting. … Would any but these boiled brains of nineteen and two-and-twenty hunt in this weather?”

This, to Blakemore, is evidence that adolescence is not a recent phenomenon.

Firstly, in the big scheme of human societal development, Shakespeare is quite recent. But I think it’s important to note that there is a difference between the perception that there is an “adolescent” stage of normal human development (and that we should, as a society, take measures against it), and the notion that your own generation, and your own status within it, is the best, or at least not the worst. The thought that “the young people (or any social group of which I am not a part) of today are not as skilled, or as intelligent, or as decent as me” is certainly not new. What is the difference between the socially defined stage of “adolescence” and classic human ageism?

Blakemore goes on to discuss risk-taking and the role of the limbic system, concluding that teenagers take more risks because the rewards from the limbic system are heightened. But how do we define whether or not a task is “risky”? Do the limbic system’s rewards respond only to tasks that the rest of the brain has come to understand as “risky”? Does peer pressure make a task seem less risky? What if this has nothing to do with risk at all? We really gain nothing from this point.

Finally, Blakemore tries to relate this all to education, saying: “40% of teenagers don’t have access to secondary school education. And yet this is a period of life where the brain is particularly adaptable and malleable. It’s a fantastic opportunity for learning and creativity. So what’s sometimes seen as the problem with adolescence, heightened risk-taking, poor impulse control, self-consciousness, shouldn’t be stigmatized. It actually reflects changes in the brain that provide excellent opportunity for education and social development.”

It’s a bit of an empty statement, as we don’t know what exactly she’s defining “education” to be. Are we meant to conclude that today’s education system is doing unseen good for teenagers? Are we meant to conclude that older people lose their ability to learn because their brains aren’t developing in the same ways? Are we simply meant to feel inspired? I don’t know.

(Unrelated digression: Blakemore mentions that the prefrontal cortex is proportionally much bigger in humans than in any other species. I imagine the point of mentioning this is to imply a correlation between proportional prefrontal cortex size and intelligence. But we judge how intelligent other living things are by how their behavior compares to ours. We assume we’re smarter than any species that can’t talk, or can’t solve problems in ways we can understand. But is that assumption valid? Can intelligence be plotted linearly, and therefore be easily judged with greater-than, less-than comparisons? I don’t mean to imply that I believe humans don’t have unique brain powers among all the other species on our planet. I only mean to assert that intelligence is not a simple matter of comparing abilities (or, by correlation, brain properties, like the proportional size of the prefrontal cortex), because we can only compare abilities that are within our power to understand, and for something to be beyond our intelligence does not imply that it is somehow more or less intelligent; it is simply a different intelligence.)

Five ideas to change the way you see the world

1. The idea of emergence
2. There are no secrets to success
3. School is stupid
4. There’s no such thing as a genius
5. There’s no such thing as a teenager

Here are my top five worldview convictions: ideas that I was not raised believing but came to accept through thought, observation, and communication with others. In a sense, they are like epiphanies; for each idea there was a time when I either had no idea about it or believed the opposite. And all of them are subjects of debate; for each one there are plenty of people out there who vehemently disagree with my position.

The books listed are simply the best ones I’ve read on the subject.  Although certain books have certainly helped convince me of some of these things, please do not think that I believe anything blindly; there are plenty of authors I disagree with.  A book’s contents and ideas are always subject to my own observations, analysis, and judgment.

1. The idea of emergence

OK, this first one isn’t necessarily that anti-intuitive, but it’s something a lot of people still seem to have trouble understanding or accepting.

There are still ongoing debates about how exactly to define this idea of emergence, but I’ll define it like this: an emergent property is a large-scale property that emerges from a bunch of simple interactions on a small scale.

A simple example might be a rush-hour traffic jam. A bunch of people get off work and drive home at the same time. A traffic jam emerges from a bunch of individual decisions to drive at that specific time. The traffic jam itself is a collection of cars; one car is not a traffic jam, and a traffic jam can be made up of different cars at different times.

A famous example is John Conway’s Game of Life. Conway made a grid and came up with a few simple “breeding” rules. A square on the grid is either living or dead, on or off. Then the player (or, more efficiently, a computer) applies the breeding rules to each square to determine whether it will be living or dead in the next iteration (or generation). It might seem boring, but after playing around with it for a while, one can easily see patterns emerge: structures that cycle through patterns, structures that cycle but move around, structures that build other structures, etc. All from a few simple rules applied to a bunch of grid squares. The point is that they all interact with each other, and the patterns emerge.
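
(For the programmers out there, the whole game fits in a few lines. Here’s a minimal Python sketch using the standard rules: a living square survives with two or three living neighbors, and a dead square comes alive with exactly three. The starting pattern is the famous “glider,” one of those structures that cycle but move around.)

```python
from collections import Counter

def step(live):
    """Compute the next generation from a set of (x, y) living squares."""
    # Count how many living neighbors every square has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A square lives in the next generation if it has exactly three living
    # neighbors, or if it has two and is already alive.
    return {sq for sq, n in counts.items() if n == 3 or (n == 2 and sq in live)}

# The glider: five living squares that cycle through four shapes while
# drifting diagonally, even though the rules know nothing about gliders.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted one square down and right
```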

Another example would be life itself, and nature’s use of DNA.  When combined with the machinery of a living cell (life doesn’t just pop up around a DNA strand all by itself), DNA contains instructions on what proteins to create.  From a bunch of small physical chemical interactions, a body grows.  Hands, brains, eyes, teeth, hair, etc.  It’s all encoded in the DNA, and it all emerges with trillions of tiny chemical interactions.  It’s important to understand that a physical body is the outcome of these interactions; though it’s encoded in the DNA, it’s not actually in the DNA.  Similarly, a music file encoded in a computer is just a long string of 0’s and 1’s, but it’s not music until this sequence is interpreted by a computer, played back through speakers, and ultimately heard by ears.  We can’t just look at the string of 0’s and 1’s and know how the music would sound.

One reason emergence can be hard to grasp or agree with, especially in the context of living systems, is that we humans tend to perceive intent, even when there’s no intent.  (There might be a more technical word for this problem, but I don’t know it.)

When we seek a reason for an event (or for the existence of something), we can seek two sorts of answers: intent to be fulfilled (a purpose), or a causal reason (cause and effect).  For example, if we ask “Why does the heart pump blood?” we can give two sorts of answers: an intent to be fulfilled (“The heart pumps blood to provide the rest of the body with supplies that travel through the blood”), or a causal reason (“The heart pumps blood because the brain sends a signal to it and its muscles contract”).  We can understand both these answers, but one is wrong: the heart has no consciousness; it doesn’t care what the rest of the body needs; it doesn’t do anything on purpose.  So why is it so natural for us to give the heart the human ability of having intent?

We can simulate similar systems, in which emergent properties arise, on a computer using genetic algorithms. For example, we can program a robot to roll through a maze based on simple rules. But we can also program the robot to figure out those rules on its own. When it’s done, the rules might seem intelligent to us, as if the robot thought about its problem and solved it with intent. But really it’s all just the outcome of the simple rule-making rules of our program. (Unless, of course, we have succeeded in programming consciousness!)

If you think about genetic algorithms, it’s not really an amazing feat. You just have the program come up with a bunch of random rule sets, test them, and weed out the ones that don’t produce the results you want.
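
(Here’s a toy sketch of that loop in Python. The fitness test is deliberately silly, just counting 1s in a bit string; a real maze-solving robot would swap in a real test, but the come-up-with-rules, test, weed-out cycle is the whole trick.)

```python
import random

GENES, POP, GENERATIONS = 20, 30, 40

def fitness(rules):
    # Stand-in test: count the 1s. A maze robot would instead score
    # how far a rule set gets through the maze.
    return sum(rules)

def mutate(rules):
    # Copy a rule set, flipping each bit with 10% probability.
    return [1 - bit if random.random() < 0.1 else bit for bit in rules]

# Start with a bunch of completely random rule sets.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Test them all, weed out the worse half, and refill the population
    # with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(fitness(population[0]), population[0])  # best rule set found
```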

The same thing happens in real life.  If the rules of making a life form (as dictated by the DNA) cause the life form to die before it breeds, its rules won’t be passed on.  Duh.  So in the end all we get are rules that “passed the tests.”

Although we do not yet know the exact science of it, emergence makes it quite plausible that God is not needed to explain the emergence of life on earth, or human life specifically.  This is enough to lead some people to atheism.  But to me it seems if your belief in God is dependent on ignorance regarding the origins of life, your faith is rather thin to begin with.  This really isn’t any sort of proof that God doesn’t exist.

There are quite a few books on this subject, and many more that relate to it, or utilize it in some way.  The two best books I have read on this subject are Emergence: From Chaos To Order by John H. Holland and Complexity: A Guided Tour by Melanie Mitchell.  (Complexity: A Guided Tour is really about the subject of complexity, obviously, but the concept of emergence is an important part of it.)

2. There are no secrets to success

If you understand the idea of emergence, this isn’t a big leap of logic: success, at least in terms of fame and money, is an emergent property.  The fame of a person or a person’s work emerges from thousands, or millions, or billions of human interactions that take place each day.

This is anti-intuitive because the system is just too complex to understand. When something becomes popular, we want to know why, and we feel that we should have the ability to know. So we analyze the work of art (and perhaps the traits of the culture that made it popular) and try to pinpoint what factors must’ve made it popular. We try to reverse engineer its success.

Ultimately, though, the system is just too complex.  There is no way to guarantee success.  There are no key factors.

And yet so many people want to analyze and analyze and analyze.  Why?

OK, this might not actually be very anti-intuitive to a lot of people.  But it implies something else, something that might be more anti-intuitive.  Eventual popularity is not inherent in anything, be it a person or a work of art or whatever.

What I mean by this is that people sometimes look at famous things and take their fame as an objective measurement of greatness, as if there’s something undetectable but inherent in the work that gives it such widespread appeal. By feeding into this, however, they unknowingly become part of the social system that makes the object famous in the first place. For example, it’s easy to look at the popularity of Mozart’s music and claim that it’s popular because genius is simply inherent in it, even though we can’t identify what factors make it a work of genius.

This sort of thought has pervaded cultures for centuries, and it’s wrong; it’s a complete misunderstanding of what exactly popularity is and how it comes about. More specifically, it’s a wrong guess about how it comes about.

You’ll notice that many of these ideas simply involve giving different, sometimes anti-intuitive (but more correct!), answers to the question “why?”

The best book about this sort of thing is The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb.  OK, it’s not exactly just about objective greatness and popularity in art; it deals with the bigger problem of induction in general.  But the two subjects are very related.

3. School is stupid

This really seems to get people riled up for some reason, I suppose because the idea of school being necessary is so embedded in our culture; we grew up with it and simply can’t imagine (perhaps even fear) a world without it.  It’s odd, because when people are young students, they usually fully agree that school is stupid.  But for some reason, as they get older, they change their minds.  Usually they’ll defend the necessity of school when they are no longer required to go themselves, as if that’s suddenly a more objective position from which to judge it?

Notice that I did not claim that education was stupid.  And I don’t doubt that school is about education; it just has an extremely inefficient and overall harmful way of educating.

There are quite a few reasons formal schooling is dumb, and I won’t go over all of them here (there are books on the subject, after all), but I’ll mention the big ones.

The biggest problem with school is the material taught.  School systems simply want to teach too much.  This comes from a misunderstanding of intelligence.  People seem to think (and I’ve blogged about this before if this is sounding familiar) that intelligence is merely about knowing stuff, and the more you know the better.  I suppose it’s a bit like the idea behind hoarding—it’s better to just keep everything you can in case you need to use it someday.  But hoarding makes it difficult to live, difficult to have room for the stuff you want later, etc.  True, memory doesn’t work quite like that, but the point should be obvious: knowledge that you don’t use is useless.  The time and effort spent acquiring it is wasted.

And people already know this; otherwise students would be taught to memorize phone books. Some knowledge is clearly useless; it’s not like the concept of useless knowledge is simply foreign to educators. They’re just bad at figuring out which knowledge is useful and which isn’t. In fact, usually someone figures it out for them, and they’d rather not think about it or question it. How many times is a teacher asked “When will I ever use this?” only to reply something like “You’ll use it on the test!” or “You’ll use it on your homework!”? It’s easy to say that a teacher who utters such words should be immediately fired, but the intellectual crime he is committing in that instance probably deserves worse.

Figuring out what knowledge is useful and what isn’t shouldn’t be a difficult feat, nor should it be up to the government or any collective institution to determine. It’s very simple: you just ask yourself, will I use this knowledge? If you are interested in the knowledge, then yes, of course, it’s automatically useful because it gives you pleasure. If you need the knowledge to get something you want (like a job), then yes, it’s useful; you are going to use it to get something. If it does not fit one of those categories, it is, at the moment, useless. What if it will be useful later? Then learn it later! That’s why people write books. Books store knowledge. You don’t have to know it until you need it! Amazing, huh?!

But you might protest: “How will I know whether or not I need a piece of knowledge until I know it?”  Easy: if you find yourself asking yourself a question, then you need more knowledge.  You could be asking yourself a question because you’re just curious (“What’s the population of the USA?”), or you could have a specific goal in mind (“How do I play the piano?” or “Can I make this work I have to do easier somehow?”).

Then you must search for the answer.  It is (or should be) up to you to find it; you can’t (or shouldn’t) just sit back and hope someone will come along and tell you.

Sometimes the answer can be found through a simple search query in Google.  Sometimes you want a deeper understanding that a book can provide.  Sometimes you might need several books.  Sometimes you might be interested in talking to a professional.  Sometimes taking a well-designed school course in the subject is appropriate.

Sometimes no one really knows the answer, and you must figure out how to find it yourself (that’s why people do experiments) or get used to the disappointment of ignorance.  (We’ll never know how many hairs were on Thomas Jefferson’s head.  Too bad for us.)

The point is that you know beforehand that there’s some sort of knowledge you want to gain, and then you seek it.

You probably realize that public schools have the process almost completely backwards.  They teach (or try to teach) students things before the student has any use for them.  This is completely counterproductive.

There is only one case in which this is justified, and that is in the teaching of young children.  Children are too inexperienced to understand what they want to learn, or why they need to learn certain things.  Some things are hard to learn, and they might naturally resist.  Most parents would agree that children need to learn to use the toilet, to pack up their toys, to not throw things at the wall, to not hit their siblings, to eat their vegetables, to tie their shoes, to dress themselves, to act politely, to read, etc.  Adults naturally need to guide their children in learning these things, even if the children claim they don’t want to learn.

This is not the case with many subjects taught in school.  There is no reason to force-teach calculus, the phases of the moon, the date George Washington died, how to calculate torque, the names of the big rivers in California, etc.

How do adults figure out what should be force-taught and what doesn’t need to be? Again, the answer is simple: do most adults use the knowledge on an everyday basis? If not, then force-teaching anybody such knowledge is a waste of time. (Note that just because most adults know a piece of knowledge does not make it useful. Most adults know that the USA has fifty states, but that does not imply that children need to be taught that fact specifically. It’s not useful information; it’s just common-sense trivia. As with all common-sense trivia, children will naturally pick it up eventually.)

(Sometimes people say: “I have very eclectic interests.  Sometimes I just read random books without searching for any specific answer.”  Well, that’s great; go for it.  But that’s not the same as subjecting yourself to a strict classroom setting, where tuition is paid, schedules are followed and tests and grades are given. In other words, this doesn’t justify anything; it’s irrelevant to the argument I’m making.)

So, from what I can tell, that is the biggest problem of our (the USA’s) current public education system.  I’ve met a lot of people who agree that public schools have problems, but they completely miss this point.  They argue for fewer grades, less work, better teachers, smaller classrooms, etc., but they uphold the belief that so much knowledge should be force-taught in the first place.  As long as so much is force-taught, schools will be flawed and wasteful.  You can’t solve any other problem without first answering: why are we teaching this in the first place?

The other problems do include the grading system. While it provides numerical assessment, it is wrongly used as a motivator (“If you don’t do this, you’ll get a bad grade!”), a punisher (“You got a low grade, so you must do more work!” or “You got a low grade, so no TV for a week!”), and a comparison system (“Sean had the highest grade in the class, so he is the best! No one else is as good as him!”). All of these hinder the actual act of learning. There are other ways to assess educational progress. Note that if the knowledge is useless in the first place and the student knows it, there is no honest way to motivate the student to learn it. This is an example of why solving these smaller problems will not help if the bigger problem mentioned above is not dealt with first.

Another problem is that schools are thought of as factories (they are “systems” after all). Students go in ignorant and come out smart. But in structuring schools like factories, we treat students like prisoners: they are split apart by age (what purpose does that serve?), they are required to sit as long as they are told, they need permission to use the bathroom, they all must work at a similar pace, they are all taught the same material at the same time, etc.

There are problems with teachers: they are underpaid (so people who might be good teachers don’t become teachers at all), they cannot be fired easily, and they sometimes aren’t very good.

Creativity is not cultivated as well as it could be; it is sometimes considered a detriment.  Music programs are sometimes cut before math programs, for example.  Why is math considered inherently more important?

There are probably books on this, but I actually haven’t read any.  As I’ve said before, people, including authors, usually discuss the smaller problems, but don’t see or agree with the bigger one.

4. There is no such thing as a genius

There is such a thing as one person having more skill than another. However, the notion of “genius” comes from a human misunderstanding of where that skill comes from. Sometimes a skill seems to come so easily to another person that we simply can’t attribute it to practice; therefore, we suppose, it must be innate, it must come from DNA, it must be a gift from God.

Sometimes intellectual fame is also considered an inevitable product of genius.  Mozart, Beethoven, Einstein, Newton, Edison, etc. are considered famous because their minds were special and the rest of the world just naturally recognized it.

But, as discussed in idea #2, their (and their works’) fame (“success”) is actually the product of our complex social interaction system.  That is, it’s an emergent property.  Mozart was not special.  Newton was not special.  Edison was not special.  Yes, they’re special in the sense that they’re famous, but they never had greater intellectual potential than anyone else.  Their status of fame is the result of both their hard work and luck. (By “luck” I don’t mean pure random chance; I simply mean it is an emergent property, a product of a system that is otherwise far too complex for us to understand.)

You, yes you, whoever you are, can play the piano and compose symphonies as well as Mozart. But you have to put in the time, and a lot of it. It’s not beyond your mental abilities (though it may be beyond your time resources). You can understand the theory of relativity, you can study quantum mechanics, you can paint a beautiful sunrise. But you’ve got to put in a lot of work and practice. Sometimes it does seem like a skill comes to some people faster than others, but no one is ever just born with it.

Again, hard work won’t guarantee fame.  Since Mozart’s famous touring-as-a-prodigy childhood, there have been plenty of other parents of young pianists seeking the same kind of fame.  But fame was not just the product of Mozart’s skill; it was an emergent property. Mozart got lucky, not just in his time, but throughout history (at least to this day; nobody knows what people hundreds of years from now will think).

Making a breakthrough scientific discovery is a bit trickier.  Again, it comes down to luck.  We might like to think it comes down to natural genius, but once you come up with your discovery, it’s not as if you’ll be the only one who’ll ever be able to understand it (if that were the case, your discovery would be useless anyway).  It might take hard work to arrive at your theory, but there’s nothing you can do innately to guarantee that you make the discovery first.

There are a few books on this subject, such as The Genius in All of Us: New Insights into Genetics, Talent, and IQ by David Shenk and The Talent Code by Daniel Coyle.

5. There is no such thing as a teenager

Similar to the notion of a genius, a teenager is a purely cultural idea that emerged from a purely cultural way of raising children. Biologically, after puberty, humans are ready to go out on their own and breed. For some reason, culturally, we don’t accept this. We might even think of it as disgusting and wrong for a thirteen- or fourteen-year-old to get pregnant. But that’s what the body is designed to do. (Or perhaps I should say that that’s how the nature of the body emerged.) The reason it seems disgusting and wrong is cultural; we were raised in a culture that thinks of it as wrong and disgusting, so we accept the belief ourselves.

What is the basis for it?

Well, you could argue that teenagers are unruly and irresponsible. But is it really biology that makes them that way? I think yes and no; that is, biology indirectly makes them that way, and would make adults that way too if they were put in similar environments. Biologically and psychologically, teenagers are ready to take the reins of adulthood. But they are not given those reins. Parents, teachers, and lawmakers deny teenagers the reins for several more years, sometimes up to a decade longer than they should. They exert control, sometimes giving teens more adult responsibilities without adult privileges.

The consequences of this should be apparent and predictable, and they’re exactly what we observe: teenagers resist.

Duh.

But then society makes the mistake of guessing that a teen’s troubles are due directly to biology and psychology; they conclude the teenager is in fact not ready to be treated like an adult, and the vicious cycle continues.

Does that mean parents of teenagers are bad?  Well, I wouldn’t say they’re evil.  After all, the belief is cultural; it’s natural and understandable that most parents would accept the common societal views of teenagerhood.  But they’re still wrong, and usually end up doing more harm than good.

Unfortunately this wrongness is even embedded in national law, so even if parents wanted to treat their teenagers more like adults, there would still be legal limits on just how many privileges the teenagers could be given.

The best book I’ve read dealing with this subject is The Case Against Adolescence: Rediscovering the Adult in Every Teen by Robert Epstein.  However, there are still scientific papers and articles on the differences between the teenage brain and the adult brain that try to explain teenage rebellion, so this is still quite a controversial subject.

Conclusion

I hope that was interesting to some people out there! I continue to see these ideas all over the place. Emergence is everywhere and helps shape our world in complex (sometimes mysterious) ways. The problem of induction leads people to false knowledge and a misunderstanding of the nature of fame and success. Schools continue to waste so much time and effort, and the people trying to make them better often miss the main problem. The cultural notion of genius encourages people to underestimate their own true abilities. And what people think about teenagers leads to endless vicious cycles of strained relationships.

I was considering adding more ideas, such as compatibilism (the notion that free will and determinism are compatible) and Ayn Rand’s ideas on selfishness (I do recommend The Fountainhead and Atlas Shrugged to everyone), but they didn’t quite make the cut.  Maybe next time.

I’m tempted to think some people get too satisfied with their convictions; they naturally resist any sort of idea that might change how they see the world. I suppose they’re afraid that if they change their outlook, it implies they’re stupid. But the opposite is true. No one is born with perfect knowledge. In fact, you’re really not born with very much knowledge at all. Most of your current knowledge came from somewhere. Your convictions should be changing as you grow older. I’m not saying they have to completely reverse every few years (that would be awful and probably would imply your stupidity), I’m simply saying one should be open and honest with himself in his judgments. Changing your mind about something is not a sign of stupidity.

Problems with this non-fiction book and such

So I’m reading a book called The Talent Code by Daniel Coyle. Overall, I’d say it’s a pretty good book, though sometimes a bit repetitive, as if the author just wanted to make the book longer, or make extra-sure he got his point across. The book firstly argues that “genius” and “giftedness” and “skill” are not innate; people aren’t just born more special than everyone else (though we seem to like this idea in fiction). Expert skill can be acquired by almost anyone who is willing to put in the enormous amounts of time and effort. (Of course, this really isn’t a world-changing view; plenty of people, including my genius self, have already concluded this. And, as I said in one of my earlier blog posts, The Talent Code feels like a sequel, or at least a companion book, to The Genius in All of Us. (By the way, I know these books might sound like cheesy self-help books, but I don’t think they’re that bad…))

The book also talks about the importance of the brain’s myelin. (It mentions it over and over and over… yes, myelin, I get it!) The book argues that myelin, which insulates the axons of the brain’s neurons, plays a key role in developing skills. Developing skills is, in fact, all about growing myelin around the proper neurons in your brain. (OK, maybe not all about growing myelin, but it’s certainly a vital factor.) But beyond that (and beyond repeating it 12 billion times), the book really doesn’t go very in-depth about the science of myelin, nor does it talk about any ways to get more myelin besides good practicing, which would be the obvious way to gain skills anyway. So I’m really not sure why the author chose to make myelin such a big theme of the book. Coyle could’ve talked about it for three or four pages and then moved on; it doesn’t seem to add that much to his point.

The book also talks about “deep practice” … that is, practicing that counts. Just going through the motions does not provide the best learning experience; you have to sit and contemplate what you’re doing, mentally recognizing some mistake you keep making, some thing you can improve on, and consciously working on it. (I’ve played some kids in chess, and some of them, after learning how the pieces move, just play the first moves that pop into their heads instead of taking the time to think. It seems useless to play like that; they’re never going to get any better without thinking. I’d actually go so far as to say that there are huge institutions which encourage (and spend millions of dollars on) “shallow practicing” … in these institutions, people just read some material, hear a lecture on it, take a test on it, and they’re done. They never apply much of their knowledge to anything. These institutions are the American high school system and the American college system. (Plenty of exceptions of course, but overall, these institutions are centered around very stupid ways to learn useless things.))

I’ve just started the chapter “The Three Rules of Deep Practice” … can’t say much about it yet, ’cause I haven’t read it yet!  But it looks interesting.

Anyway, I came across some quotes from the book that I don’t quite agree with.  Overall, it’s an interesting book, and I’d say it’s “good” … but these quotes really annoy me.

On pages 49-50, Coyle writes:

A famed 1956 paper by psychologist George Miller, called “The Magical Number Seven, Plus or Minus Two,” established the rule that human short-term memory was limited to seven pieces of independent information (and gave Bell Telephone reason to settle on seven-digit phone numbers).

OK, this quote isn’t that annoying, but I wonder if this notion that “telephone numbers have seven digits because of short-term memory studies” is just a myth; I’ve never seen any evidence of it, and the author here doesn’t cite anything. Is he just repeating something he read somewhere without checking up on it?

Even if this notion were true, it wouldn’t make much sense. The digits of a phone number are not “pieces of independent information” … you can remember a sequence of 12 or 15 digits (or plenty more) very easily if you use them enough; you remember them as a sequence, or maybe even as an image or one big chunk. And if the goal was to make phone numbers easy to remember, shorter is always better, so why not make them shorter? Or why not disregard number length completely and just use easy-to-remember sequences? For example, 11111 is easier to remember than 59834. You don’t have to actually remember 1, then 1, then 1, then 1, then 1. Instead you just remember “5 1’s” … so you could perhaps have sequences like 444-555-1. Then you just remember “3 4’s, 3 5’s, 1” … the 1 being the “end” symbol. Then we could have a ton of possible numbers with very little remembering to do.
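
(That “3 4’s, 3 5’s, 1” scheme is basically what programmers call run-length encoding: store each digit once along with how many times it repeats. A quick Python illustration, just to make the idea concrete:)

```python
# Run-length encoding: remember each distinct digit once, plus a repeat
# count, instead of the raw sequence. "4445551" becomes "3 4's, 3 5's, 1".
from itertools import groupby

def compress(number):
    return [(digit, len(list(run))) for digit, run in groupby(number)]

print(compress("4445551"))  # [('4', 3), ('5', 3), ('1', 1)]
```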

I’m sure there are some problems with that system, but my point is that 7-digit phone numbers could just be a coincidence. I’m not convinced a huge amount of psychological thought went into choosing how many digits to make phone numbers; I think people just used what they were comfortable with. Maybe they did put a ton of thought into it and labored over scientific papers on short-term memory, but I haven’t seen any actual evidence of it, besides people mentioning it in passing when they talk about the “7 items in short term memory” thing.

Anyway, that’s just a small annoyance. A bigger annoyance is what Coyle writes next on page 50:

When one of Ericsson’s student volunteers memorized an eighty-digit number, the scientific establishment wasn’t sure what to think.

Ericsson showed that the existing model of short-term memory was wrong. Memory wasn’t like shoe size–it could be improved through training.

But I just read about this in The Genius in All of Us! Yes, these student volunteers learned to memorize huge sequences of random numbers, but did that really improve their short-term memory? Not necessarily. Give them random sequences of letters, or animal names, or DNA code, and they become normal again. They weren’t really “improving their short-term memory”; they were teaching themselves number-chunking skills. If you chunk 7 and 8 and think “seventy-eight,” 7 and 8 are no longer independent entities; you remember them as a group, one number. But what’s most striking is the non-transferability of these students’ memorization skills. Ultimately their skill is useless because we have very little need for memorizing large sets of numbers, and they don’t have the skill to memorize vast amounts of just anything on the fly. So I’m not sure I really buy the notion that “the existing model of short-term memory was wrong.” Maybe it was, but Ericsson’s study is not direct evidence of that, as far as I can tell.

(On a side note, transferability is a huge topic in psychology and education. It’s easy to look at a really good piano player, notice other things he does well, and reckon “ah, playing the piano helps your math skills” or whatever. Maybe it does in some amount, but people forget that correlation does not prove causation. You cannot tease such cause-and-effect out of the complexity of human behaviour just with passive observation. Yet schools (and people trying to sell educational material) do this all the time. “Playing chess will help your logical reasoning!” “Listening to Mozart will improve your math skills!” etc. (Again, not that it doesn’t, but it’s much more complex than just playing chess and suddenly applying logic in more places. Transferability of skills is simply not so simple. (It would be interesting to read a big scientific book on the subject, but I’m not sure it’s been written. I’ll have to look around.)))

This next annoyance isn’t really Coyle’s fault since he’s just quoting someone else. On page 66:

“Why do teenagers make bad decisions?” he [George Bartzokis] asks, not waiting for an answer. “Because all the neurons are there, but they are not fully insulated. Until the whole circuit is insulated, that circuit, although capable, will not be instantly available to alter impulsive behavior as it’s happening. Teens understand right and wrong, but it takes them time to figure it out.”

*Sigh* … more teenage brain bias based on no evidence. Firstly, this doesn’t explain teens who make no more bad decisions than adults, like, gee, I don’t know, me. Nor does it explain adults who make worse decisions than teens, or pre-teens who make better decisions (they would have even less myelin). Secondly, there doesn’t seem to be any actual science behind it. OK, we know there’s myelin, we know it helps, we know teens have less of it (in general, at least, though I’m not even sure how much evidence of this there is), but, as usual, correlation doesn’t prove causation. You can’t just say “Ah, teens have less myelin, therefore that is the cause of their bad decisions! Makes sense to me! And I’ve seen teens make bad decisions, so it must be true!” It seems it’s just old people generalizing teenage behaviour and assuming little can be done about it, that it’s just innate and must be countered with parental control. It’s quite sad and disturbing and ultra-annoying.

Then, on page 67, Coyle quotes Bartzokis as saying:

“Sure, you can teach a monkey to communicate at the level of a three-year-old, but beyond that, they are using the equivalent of copper wires.”

Er … I’ve read up on the science of monkeys learning language, and I’ve yet to see any convincing evidence that monkeys are even close to learning language at a three-year-old level. Mr. Bartzokis’s credibility, like the list of Gandalf’s and Elrond’s allies after the betrayal of Saruman, grows thin.

Anyway, there are some quotes from this book that I like (as I said, overall, I think this is a good book). For example, the author at times seems to recognize the complexity of human behaviour. Coyle talks about David Banks, “a Carnegie Mellon University statistician.” Banks realizes that geniuses (at least famous geniuses) tend to appear throughout history in clusters, not at regular intervals. He wonders why this is. He says that conventional wisdom might point to certain cultures, certain political environments, certain cultural wealth, etc., all making the environment perfect for nurturing geniuses. Banks, however, does not see any strong correlations. So Coyle writes on page 63:

Banks’s paper neatly illustrates the endless cycle of tail-chasing that ensues when you apply traditional nature/nurture thinking to questions of talent. The more you try to distill the vast ocean of potential factors into a golden concentrate of uniqueness, the more you are nudged toward the seemingly inescapable conclusion that geniuses are simply born and that phenomena like the Renaissance were thus a product of blind luck. As historian Paul Johnson writes, giving voice to that theory, “Genius suddenly comes to life and speaks out of a vacuum, and then it is silent, equally mysteriously.”

See, isn’t that a good paragraph? Or am I just using confirmation bias? No, I think it’s a good paragraph.

On page 53, Coyle writes:

In the vast river of narratives that make up Western culture, most stories about talent are strikingly similar. They go like this: without warning, in the midst of ordinary, everyday life, a Kid from Nowhere appears. The Kid possesses a mysterious natural gift for painting / math / baseball / physics, and through the power of that gift, he changes his life and the lives of those around him.

That quote made me laugh; it seems pretty true, doesn’t it? In fact, how many stories in general, even if not involving a “genius” character, involve some main character (or set of characters) who is just more special than everyone else? And why is that? To feed our natural desire / daydreams to be that kind of person? Not that this is necessarily a bad thing; I enjoy reading those kinds of stories and have some novel plots like that myself. But we should also realize that the “specialness” of characters in stories is not like real life…

Consider Pixar’s awesome movie The Incredibles. (By the way, I talk to one of the animators from that movie every week, brag brag brag, ha ha!) Firstly, the movie centers around characters who are definitely more special than everyone else… they have super powers after all. When you imagine yourself in that movie, would you imagine yourself being a regular non-powerful person? Maybe a non-super friend who learns their secret but is happy to keep it with them? Probably not. (Disney channel shows love doing that, giving one or a few characters special abilities and having their friends happily accept their side-kick roles.)

Anyway, there’s a part in The Incredibles in which Elastigirl (the mom) tells her son, Dash, that “everyone is special.” To which Dash replies, “which is another way of saying no one is.” Beyond that the movie doesn’t really resolve the issue. Very quotable. How I resolve it: Yep, it’s true. Yep, sorry. No one is special. Everyone is. Live with it. What, Dash, you have to be more special than everyone else? Selfish conceited jerk!

Yet, in fiction, we don’t really live with it. We pretend it’s not true. We imagine stories of characters who really are more special than everyone else. The “chosen one” syndrome, as I might call it. I’m not sure why we do it, but we should at least recognize that we do. (Or maybe only I do since I am more special than everyone else.)

(Think about other exchanges Dash and Elastigirl could’ve had: “No one is special, Dash.” “Which is another way of saying everyone is!” or “The glass is half empty, Dash.” “Which is another way of saying the glass is half full!”)

(On a side note, Coyle also points out in the footnotes that the notion of the “Heroic Artist”–the genius artist who is more special than everyone else–may be a more recent phenomenon in the course of human history, something that perhaps emerged in the Renaissance? Culture now supports the worshipping of geniuses of the past, putting them on pedestals: Shakespeare, Mozart, Beethoven, Rembrandt, da Vinci … such great works of art they produced! These people were not like us; they were geniuses high above us!)

OK, whew, didn’t mean for my post to get so long, but I think those are all the points I wanted to make today!

Some random things that I must say today

A few things…

OK, a few things.  Firstly, I finally updated my WordPress to 3.0!  Woohoo!  I’m all updated!  Not that one can really notice from just reading the blog…

Secondly, I created a new YouTube channel at youtube.com/seanhannifin to post random non-music stuff, probably mostly animation tests so that I can share my Animation Mentor progress. My first animation attempts are posted there.

Woohoo!

Um… what else?

Comic-Con

I don’t really know much about Comic-Con, except that it’s apparently a pretty popular event. I don’t have the time or money to go to any such conventions (or the social connections that would make going to such an event more fun). Anyway, Comic-Con will be streaming live at MySpace starting sometime today, so I might check it out for about 5 minutes…

A few responses to NurtureShock

I’m reading this book called NurtureShock: New Thinking About Children by Po Bronson & Ashley Merryman.  It’s a very interesting book; each chapter is dedicated to shedding new light on a certain topic and giving a new perspective on it.  (Just look at the table of contents on Amazon if you really care what those topics are… I might blog about more of them in the future.)  I love books that try to tackle long-standing myths.

Anyway, chapter seven is called The Science of Teenage Rebellion, and while it doesn’t go into too much depth (after all, you could write entire books on this topic … and people have), it does make some interesting points.

This post is really not about those points, though.  It’s really just my reaction to some quotes from the chapter.

On page 140, it says:

Pushing a teen into rebellion by having too many rules was a sort of statistical myth.  “That actually doesn’t happen,” remarked Darling.  She found that most rules-heavy parents don’t actually enforce them.  “It’s too much work,” says Darling.  “It’s a lot harder to enforce three rules than to set twenty rules.”  These teens avoided rebellious direct conflict and just snuck around behind their parents’ backs.

Woah.  So, just lying to your parents and breaking the rules behind their backs is not rebellious?  You think the parents would be OK with that?  So… it’s good to set rules as a parent, because, hey, if it’s too many, your child will just break them behind your back…?

I think it’s possible for a parent to set too many rules, and not enough rules, and doing either could help cause rebellion.  And by “rebellion” I mean teenagers disobeying their parents, not just avoiding direct conflict.

That paragraph makes it hard for me to understand what the authors are trying to say, so I can’t really agree or disagree with them on it.

Then, in a new section, on page 141, the book says:

The Mod Squad study did confirm Linda Caldwell’s hypothesis that teens turn to drinking and drugs because they’re bored in their free time.

Woah again!  The book says pretty much nothing about how this was confirmed.  It seems way too simplistic to me.  What about the many environmental influences?  Peer pressure, parental pressure, school pressure, the availability of drugs and alcohol, etc.?  I’m not convinced anyone ever does anything just because they’re bored.  There’s always more to it than that.  If you were really bored, you wouldn’t do anything!

The book then talks about how Caldwell created a program called TimeWise, which tries to help kids counter boredom.  And it says on page 143:

For the seventh-graders who started out the most bored, “it didn’t seem to make a difference,” said Caldwell.  It turns out that teaching kids not to be bored is really hard–even for the best program in the country.

Why didn’t TimeWise have a stronger effect?

My guess would be that after TimeWise, kids are thrust back into the environment they were in before.  Yes, their time spent in the TimeWise program could affect their choices a bit, but they didn’t drink and do drugs just because of mere boredom in the first place!  You got your premises wrong.  (The real results of the Mod Squad study might’ve been more complex than this, I don’t know.  As I said, the book gives no explanation as to how the study confirmed such a thing.)

And then, bum bum bum… the book says:

Is it possible that teens are just neurologically prone to boredom?

According to the work of neuroscientist Dr. Adriana Galván at UCLA, there’s good reason to think so.

To me, it seems there’s bad reason to think so.  Basically, scientists do these brain scans and watch parts of the brain light up.  For teens, they find that the prefrontal cortex doesn’t light up as much when the teen is supposedly excited (it shows a “diminished response whenever their reward center was experiencing intense excitement”).  And the prefrontal cortex is “responsible for weighing risk and consequences.”  Therefore, when the teen is excited… “the teen’s brain is handicapped in its ability to gauge risk and foresee consequences.”

That’s it?

That’s the evidence?

The prefrontal cortex shows a “diminished response” and therefore teens aren’t as good at foreseeing outcomes and are therefore just naturally prone to risky behaviour?

And never mind the environment?

And… weren’t you at first trying to say something about boredom?

Overall, it’s a very interesting book.  I think the authors need to do a bit more research in this area though.

Yet even more long blatheryness about consciousness

My family and I are off to see the musical Wicked tomorrow.  Should be fun.  It will be the closest thing to a vacation I’ve gotten and will get for a while, methinks.

The rest of this long blathery post will be yet some more thoughts I think I thought while reading Consciousness Explained by Daniel C. Dennett.

Funny little story

Here’s just a funny little story from page 59 of Consciousness Explained by Daniel C. Dennett:

A neurosurgeon once told me about operating on the brain of a young man with epilepsy … [he was] making sure that the parts tentatively to be removed were not absolutely vital by stimulating them electrically and asking the patient what he experienced … one spot produced a delighted response from the patient: “It’s ‘Outta Get Me’ by Guns N’ Roses, my favorite heavy metal band!”

I asked the neurosurgeon if he had asked the patient to sing or hum along with the music, since it would be fascinating to learn how ‘high fidelity’ the provoked memory was.  Would it be in exactly the same key and tempo as the record? … The surgeon hadn’t asked the patient to sing along.  “Why not?” I asked, and he replied: “I hate rock music!”

Later in the conversation the neurosurgeon happened to remark that he was going to have to operate again on the same young man, and I expressed the hope that he would just check to see if he could restimulate the rock music, and this time ask the fellow to sing along.  “I can’t do it,” replied the neurosurgeon, “since I cut out that part.”  “It was part of the epileptic focus?” I asked, and he replied, “No, I already told you — I hate rock music!”

I wonder if I could make everyone in the world love my music and hate other people’s music by operating on their brains?  I wonder if I could religiously convert them too, so that they would all think I’m a god.  But, of course, I believe that would be morally wrong, so I would have to operate on my own brain first.  Then I would believe it to be right.

Ha ha ha!

On page 62, Dennett writes:

There is a species of primate in South America, more gregarious than most other mammals, with a curious behavior.  The members of this species often gather in groups, large and small, and in the course of their mutual chattering, under a wide variety of circumstances, they are induced to engage in bouts of involuntary, convulsive respiration, a sort of loud, helpless, mutually reinforcing group panting that sometimes is so severe as to incapacitate them.  Far from being aversive, however, these attacks seem to be sought out by most members of the species, some of whom even appear to be addicted to them.

When I realized he was talking about humans and our habit of laughing, I could not help but engage in involuntary convulsive respiration myself.  When you laugh at the thought of how strange laughter is, you can create an internal infinite laugh loop.

Thoughts on the whyness of things and such

On page 64, Dennett writes:

We can give a perfectly sound biological account of why there should be pain and pain-behavior … what we want is a similarly anchored account of why there should be hilarity and laughter.

I think one has to be careful in asking “why?” because it can mean two different things.  There’s the cause-and-effect why and the purpose why.  For example, if I ask “why does the heart pump blood?” you could either answer “to get blood to other parts of the body, duh” (purpose why) or “because the brain tells it to, duh” (cause-effect why).

The thing is, purpose why applies only to human actions (and perhaps animal actions); consciousness and planning create purpose why.  Nature works only with cause-effect why.  But we tend to project a purpose why understanding of the world sometimes, especially on things like evolution and living systems.  Why do we have hands?  Not to grab things; nature doesn’t know anything, and it doesn’t care about grabbing.  You could argue that being able to grab things has provided an evolutionary advantage.  OK, but that still doesn’t answer how hands came to be.  Before creatures could grab things, nature didn’t say “it would be nice to have a body part that could grab things!”

Ultimately I think the reason we have hands, the reason we laugh, the reason we cry, feel pain, etc., all lie in the complexity of DNA replication over many millions of years (and the effect of having physical advantages (which is not to say that all elements of the human body have some evolutionary advantage; I doubt they do; why only one thumb, for instance?  There’s no advantage to having only one thumb)), and since that system is too complex for us to understand at the moment (and there are things about it we may never be able to fully know anyway, like the entire DNA structures of all of our ancestors), we might as well say that it’s random, that there is no reason.

All that said, asking [the right kind of] why might still help us learn something, but we should realize that it might be something we can never know.  Dennett might call this “defeatist thinking” … but oh well.  (Oh well?  More defeatist thinking!)

Knowing thyself

On page 67, Dennett writes:

Perhaps we are fooling ourselves about the high reliability of introspection, our personal powers of self-observation of our own conscious mind. … We are either “infallible” — always guaranteed to be right — or at least “incorrigible” — right or wrong, no one else could correct us.

This reminds me of a post I wrote a long while ago in which I blathered about why I hated being a teenager.  (It has nothing to do with a “maturing brain” and everything to do with society and parents trying to continue to maintain power and control over “teens,” which is a pretty new word/concept in the scope of human history.)  If you read the comments, someone says:

Though I can’t say I agree with the phrase “That’s why” in cases like this… “That’s what made me moody and depressed” — I really don’t think anyone has the authority on how their responses work to stimuli. If you’re on that level, you ought to be able to supersede them and establish control over your mind; however, I think that inability to control goes hand in hand with deficit understanding.

To which I responded:

Yikes! But then, who does? Does anyone? Shouldn’t I be the authority on how I feel, if I speak for myself at least? Can’t I know what’s making me miserable?

Now, I’d still defend the notion that teens being forced to do things makes them miserable. I think it makes just about everyone miserable.  Would parents in their 30s or 40s really want to trade places with their teens? I think not (though some might not admit it). But then, how many teens would agree with me? What are the reasons teens give for being so “moody”? The world is stupid and no one understands them?

So, I still agree with myself on the issue of “the myth of the teen brain” (and the myth that there even is a “teen” stage of psychological development), but I also agree that in many circumstances (uh… except this one) we should be cautious of thinking we can understand why we feel what we feel.

In fact, I think this is kind of exploited in works of fiction like the show House, when a character might say something like “I’m trying to help you!” and House will say something like “no, you don’t care about me, you just feel guilty about what you said to Chase” or some other psychological twist that sheds new light on everyone’s motivations.  That’s one of the reasons the show is fun to watch… the characters’ true motivations for everything are almost always in question (OK, maybe not always, but still).

How well can we truly understand our own motivations, the causes of our feelings, our own thought processes, and whatever else?  How are we to know?

On a side note, I’ve always thought it not only useless, but also a bit dangerous, to psychoanalyze yourself too deeply (or to believe someone else’s psychoanalysis of you).  You’re quite likely to be wrong about yourself, and then, acting on your own psychoanalytical conclusions, you may destroy yourself even further whilst thinking you’re helping yourself.

Though maybe I’m just saying that because I’m uncomfortable being too self-conscious… oh wait, oops, I was psychoanalyzing myself there…

But, really, if someone tried to convince me that they knew how their own mind worked, and what their subconscious desires were, I’d think “oh brother” and not believe them.  Unless they agreed with me on the teenager issue, of course.

That’s all folks

OK, is that enough?  I think so.  I kind of rambled, and I’m not sure I’ll fully agree with everything I said a few days from now, but writing all this helped the spare time go by today at work, and it made me feel as if I was doing something useful with that spare time, even though you can probably tell that that was not the case.

Teenagerhood and YA books

A few days ago I came across this blog post, by Shaun Duke I believe: Young Adult Fiction Can’t Win.

I can’t really respond to Shaun because I’m not sure what he’s saying.  The post mainly made me want to go off on a tangent… what is YA fiction?  Why is it needed?  I think it’s a stupid idea in the first place!

There might be plenty of definitions, but the one that makes the most immediate sense to me is: YA fiction is fiction in which the main character is a YA, a teenager.

Some might argue that the nature of a story’s conflict also makes YA fiction what it is; the plot must deal with teenage issues.  But such a definition makes me cringe.  What in the world is a “teenage issue”?  (To be perfectly honest, I hate the notion of there being a “teenager” stage in the course of human development at all.)

My own teenagerhood

Maybe I just had a very fortunate adolescence, but in high school and college I was more of an introvert (and am, and always will be, really), and I tended to hang out with people who shared my interests and were about as “nerdy” as me.  I never wanted to be popular or look cool or attractive, and that never made me feel lonely.  I never had any peer pressure to do drugs or drink alcohol or do anything risky or stupid.  The world of relationship woes is still another world to me.

That said, I still hated adolescence.  But it wasn’t because of drugs or relationships.  It was because of SCHOOL.  School was a lot of hard work, most of which I still believe was absolutely meaningless.  Society just thrust it upon us because that’s the tradition.  It gave me a lot of unnecessary worry and stress, and it took away a lot of time that I would have loved to use in more useful ways.  I was not and could not be in control of my life, and that’s what made me angry and moody and depressed.  It had nothing to do with “coming of age” or dealing with drugs or relationships or the “changing brain” that people are now claiming teenagers have.  It was just plain old not being in control.

And the only way out of it was to just get through school.

(I still get extremely angry just thinking about how the generations before me could allow something as dismal and pointless (and harmful and depressing) as the current high school educational system to emerge and persist!  What complete buffoons!)

Still, I’m 23 years old now, and I don’t think anything magically changed within me from when I was 15 or 16 or 17.  Of course, I have learned more about certain things… I can drive a car much better now, I think I can write music and literature better, I can program in Java better, blah blah blah, but nothing has drastically changed inside.  I never “came of age” or learned some mystical truth that made me pass from “teen” to “adult” … I just got through school.

So maybe I didn’t have the normal “teen” experience?  Did I miss something?  What do teenagers really want?  For me, it was just control and freedom.  For others, is it popularity?  Wanting to feel loved?  Wanting this-or-that person to be your boyfriend/girlfriend?  If so, then yeah, I did (and hopefully always will) miss out on suffering over those things, but I don’t think those are just “teenage” issues; they’re life issues that everyone must learn to deal with, and there are plenty of adults who still struggle with them.

Even “being in control” is really a life issue, but getting older and out of school tends to solve it.  (Though never completely!)

Some confirmation bias

I came across this article about an adolescent Bill Gates which stated:

The battles reached a climax at dinner one night when Bill Gates was around 12. Over the table, he shouted at his mother, in what today he describes as “utter, total sarcastic, smart-ass kid rudeness.”

That’s when Mr. Gates Sr., in a rare blast of temper, threw the glass of water in his son’s face.

He and Mary brought their son to a therapist. “I’m at war with my parents over who is in control,” Bill Gates recalls telling the counselor. Reporting back, the counselor told his parents that their son would ultimately win the battle for independence, and their best course of action was to ease up on him.

Aha!  See?!  Told you so.  It’s about control.  This Bill Gates anecdote proves it!

Conclusion

When I was a teenager, I didn’t care about the age of the protagonist, and I didn’t read fiction to commiserate with a fictional character.  (Not entirely, at least; I guess it’s more about trying to understand your own struggles in different ways, so I don’t mean to say that fictional characters shouldn’t deal with real-world issues.  They should.)  Nor did I much care for the notion of being “written down to” … the notion that there was some adult who could “understand me” and impart wisdom.  One of the first things you learn when you’re a teenager is that adults actually aren’t always all that wise.  (The wise ones will be the first to admit that.)

So I think the whole idea of YA fiction is just a stupid emergent property of this whole “teen culture,” created by a society that has infantilized and sought control over its youth for far too long, and it’s really not needed at all.  (Or at least the need has been artificially created.)  Teenagers can enjoy any book they want, and I wouldn’t mind it if the YA market vanished completely.  Books with adolescent main characters could of course still be written, and it’s probably only natural that younger folks would be more attracted to those stories, but those books don’t have to be an entirely different subset.  We don’t have “twenty-ish fiction” … fiction about adults in their twenties for adults in their twenties.  Likewise with “thirty-ish fiction” or “senior fiction” … yet those stories are still out there.  Every main character has an age.

Eh… so there’s my rant.

By the way, check out Robert Epstein’s book The Case Against Adolescence: Rediscovering the Adult in Every Teen.  Not sure he’d necessarily agree with my opinions, but it was some more confirmation bias for me when I first came across it.

Also, here’s a Wikipedia article on what confirmation bias is, in case you’re curious!