Archive for the ‘Useless Bullshit’ Category

One of the arguments in favor of affirmative action is that the pool of talented individuals is large enough to accommodate fairly wide variations in how one defines “the best” and still yield a good leadership cadre, freshman class, etc. Put differently, if Harvard, instead of admitting the 5.9 percent that it does admit, admitted the next 5.9 percent (the ones who “just missed the cut”), it would still probably be fine.

With the release of Pitchfork’s “People Power” list, Jody Rosen at Slate has skewered Pitchfork’s readership for selecting mostly white, overwhelmingly male indie rockers. At the very least, the list is boring and predictable. Unsurprisingly, all the albums in the Top Ten got good reviews from Pitchfork.

So, OK, what happened? One explanation is that women didn’t make a lot of readers’ lists, which may have to do with a distaste for the kind of list-making mania that often captivates music nerds and snobs. (See, for example, High Fidelity‘s “Top Five” obsession.) Additionally, there may be less consensus on female artists than on male ones, so the process of averaging out individual lists yields fewer women; a toy simulation of this mechanism appears below. (This may be giving Pitchforkers too much credit.) Finally, there may simply be fewer female artists regularly making music. There are probably a variety of reasons for that, but if we took a random sample of rock bands, I bet we would find a low rate of female participation, whether as a result of choice, prejudice, or some combination thereof. This is purely a hypothesis, of course; I have no evidence one way or the other.
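To see how that averaging mechanism could work, here is a toy simulation with entirely made-up numbers (the artist names and vote counts are hypothetical, not actual Pitchfork ballot data): male and female artists receive the same total support, but the female votes are spread across three times as many candidates, and the aggregate Top Ten comes out almost entirely male.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical vote-splitting model: equal total support for male and
# female artists, but the female votes are dispersed across more candidates.
male_artists = [f"M{i}" for i in range(20)]    # votes concentrated
female_artists = [f"F{i}" for i in range(60)]  # votes dispersed

votes = Counter()
for _ in range(10_000):
    votes[random.choice(male_artists)] += 1    # ~500 votes per male artist
for _ in range(10_000):
    votes[random.choice(female_artists)] += 1  # ~167 votes per female artist

top_ten = [artist for artist, _ in votes.most_common(10)]
print(top_ten)  # typically all "M" entries, despite equal total support
```

Even with perfectly equal enthusiasm in the aggregate, the side with less consensus gets shut out of the top of the list.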

Because all list-making is arbitrary by nature, I’m going to pick an alternate canon of Top Ten albums that could plausibly have been in Pitchfork’s Top Ten (that is to say, they fit the Pitchfork ethos, got good Pitchfork reviews, and are listened to mostly by indie rock nerds) but that features far more women. Much like those next 5.9 percent of Harvard rejects, this is a set of albums that I think Pitchforkers could reasonably say are as good as any of the albums in the actual Top Ten. I am generally a fan of quotas, because I don’t think people embrace diversity in almost any setting unless they are forced to. So here goes:

  1. Lauryn Hill – The Miseducation of Lauryn Hill
  2. Yeah Yeah Yeahs – Fever to Tell (It’s Blitz could probably go here too)
  3. M.I.A. – Arular
  4. Neko Case – Fox Confessor Brings the Flood
  5. Janelle Monae – The ArchAndroid
  6. Aimee Mann – Bachelor No. 2 (or, the last remains of the dodo)
  7. Fiona Apple – Extraordinary Machine
  8. Robyn – Body Talk
  9. PJ Harvey – Stories from the City, Stories from the Sea
  10. Loretta Lynn – Van Lear Rose

Other alternates: any number of Sleater-Kinney albums; and though I’m not a huge Björk fan, any number of Björk albums could go too.

Because all lists are inherently arbitrary, without a requirement for some characteristic other than “what’s good,” a bunch of mostly white male rock nerds will inevitably pick a bunch of mostly white male rock music as “the best.” Take a look at any compendium of “best ever” album lists and you’ll see the skew in effect.

But if that were my Top Ten list from the 1996-2011 time period, I’d be pretty happy.

Read Full Post »

With modern materials, motors, etc., humans have apparently achieved bird-like flight, flappy wings and all.

I often wonder about path-dependency and determinism, particularly when it comes to inventions and innovations. For example, if more advanced fabrics and miniaturized motors had existed in the early 1900s, would we have seen flappy flight before fixed-wing flight? Or did we need fixed-wing flight to get miniaturized motors? It seems impossible to imagine a world with flight but without fixed-wing flight. Yet is it really so implausible? Experimenters at the time certainly liked the idea, and with different materials, who knows what they could have done?

When the term “thinking outside the box” is bandied about, I always wonder who among the human race will actually go out and do the crazy thing that everyone thinks is stupid. Apparently this guy. Also these dudes. Flying looks fantastic:

Read Full Post »

Because we have to keep up our lead in Useless Bullshit somehow.

Read Full Post »

It’s worth waiting until the very end, too.

Read Full Post »

It’s the human walk on water walking roller ball!*

You apparently seal yourself inside and then can walk on water. And once you start, one of two things will happen:

(1) The ball is not sealed well enough to prevent water from coming in, in which case you either drown or nearly drown (like the top reviewer says he did!). Or,

(2) The ball is sealed well enough to prevent water from coming in, but, kids, don’t have fun out there for too long, or else you’ll run out of oxygen and pass out!

How is this a thing?

*Yes, that’s the actual name of it.

Read Full Post »

I was thinking about buying a probe thermometer because I would like to better assess the doneness of my meat, so I went to Amazon to take a look. I typed in “probe thermometer” and Amazon returned a set of thermometers. They vary in price and functionality, but they all have one thing in common: 3.5 stars.

The other day, I decided I wanted to try a new Indian restaurant in my area other than the few I always frequent. Lo and behold, they all have 3.5 stars on Yelp.

It seems obvious that, given enough reviewers, customer reviews should tend to mellow out around, well, 3.5 stars. But there are notable exceptions (consider, for example, IMDB).

What is it about product/service reviews in particular that seems to promote the averaging out around 3.5 stars? Let’s take one heavily reviewed Amazon product — an excellent book that I recently read called The Art of Fielding (in my opinion, the best book ever written about baseball):

5 stars: 100
4 stars: 34
3 stars: 35
2 stars: 29
1 star: 50

It’s an odd distribution, to say the least, but it highlights a problem with the kind of people who write online reviews: the ratings cluster at the extremes, either very high (“I enjoyed this book so much that it warranted a review”) or very low (“this book was so bad that I decided to review it”). Compare this with the reviews for the digital probe thermometers and the effect is similar:

5 stars: 41
4 stars: 23
3 stars: 10
2 stars: 12
1 star: 28

Again, lots of 5-star reviews and lots of 1-star reviews, and again, this points to the kind of person willing to review the product at all. In buying the product, you had already analyzed it and expected it to be worth your money. If you were extremely impressed or extremely disappointed, you reviewed it. If you were merely meh, why bother?

Thus, all popular products inevitably end up in the meh bin.
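The arithmetic bears this out. Here is a quick sketch (using the star counts quoted above) showing that both of these U-shaped distributions average out to roughly the same middling score:

```python
# Weighted mean star rating computed from the two count distributions above.
def mean_rating(counts):
    """counts maps a star value (1-5) to the number of reviews at that value."""
    total_reviews = sum(counts.values())
    total_stars = sum(stars * n for stars, n in counts.items())
    return total_stars / total_reviews

art_of_fielding = {5: 100, 4: 34, 3: 35, 2: 29, 1: 50}
probe_thermometer = {5: 41, 4: 23, 3: 10, 2: 12, 1: 28}

print(f"The Art of Fielding: {mean_rating(art_of_fielding):.2f}")    # 3.42
print(f"Probe thermometer:   {mean_rating(probe_thermometer):.2f}")  # 3.32
```

Two very different products, two very different sets of reviewers, and nearly identical averages that a half-star display rounds to about 3.5.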

Weirdly, this does not apply to non-book pop culture. For example, Adele’s 21 sports a shocking 4.5-star rating, but that has to do with the ability to sample the wares before you buy (the same goes for movies). Before buying an album, you have heard the songs enough times to know whether you like it.

What’s unnerving about this tyranny of the 3.5-star review is that it makes the customer reviews essentially worthless. The whole promise of crowd-sourced reviews was that they would break the monopoly of professional product reviewers and open everything up to the masses. Instead, the incentives to respond are such that the reviews provide little to no value to the consumer.

Whether it’s recipes or Zadie Smith novels, the “pretty good” averaging out of reviews has hurt their ability to tell us much about the product we’re buying. In the end, we either end up trusting the qualitative reviews over the quantitative (a bad proposition, if you ask me) or we buy the product and hope for a generous return policy.

As with so much of the new information-heavy world, these reviews don’t make our decisions any easier; they’re just more information, with no real sense of what it means.

Read Full Post »

NEKKID GRANDMA!

I love everything about this clip, from the immediacy and certainty (and loudness) of the answer, to the agreement from the other contestant, to the extremely generous awarding of the answer.

I had to leave my lab for a good several minutes to prevent myself from exploding/crying/dying of laughter in front of my labmates.

Read Full Post »
