
Archive for the ‘Useless Bullshit’ Category

One of the arguments in favor of affirmative action is that the pool of talented individuals is large enough to accommodate fairly wide variations in how one defines “the best” and still yield a good leadership cadre, freshman class, etc. Put differently, if Harvard, instead of admitting the 5.9 percent that it does admit, admitted the next 5.9 percent (the ones who “just missed the cut”), it would still probably be fine.

With the release of the Pitchfork “People Power” list, Jody Rosen at Slate has skewered Pitchfork’s readership for its selection of mostly white, overwhelmingly male indie rockers. At the very least, it’s boring and predictable. Unsurprisingly, all the albums in the Top Ten got good reviews in Pitchfork.

So, OK, what happened? One explanation is that women didn’t make a lot of lists, but I think that may have to do with a distaste for the kind of list-making mania that often captivates music nerds and snobs. (See, for example, High Fidelity‘s “Top Five” obsession.) Additionally, there may be less consensus on female artists than on male ones, and the nature of averaging out lists ends up yielding fewer women. (This may be giving Pitchforkers too much credit.) Finally, there may simply be fewer female artists regularly making music; there are probably a variety of reasons for that, but if we were to take a random sample of rock bands, I bet we would find a low rate of female participation. That may be the result of choice, prejudice, or some combination thereof, but the gap probably exists nonetheless. This is purely a hypothesis, of course; I have no evidence one way or the other.

Because all list-making is arbitrary by nature, I’m going to pick an alternate canon of Top Ten albums that could theoretically have been in Pitchfork’s Top Ten (that is to say, they fit the Pitchfork ethos, got good Pitchfork reviews, and are listened to mostly by indie rock nerds) but that includes far more women. Much like those next 5.9 percent of Harvard rejects, this is a set of albums that I think Pitchforkers could reasonably say are as good as any of the albums in the Top Ten. I am generally a fan of quotas, because I think people in almost any setting don’t embrace diversity unless they are forced to. So here goes:

  1. Lauryn Hill – The Miseducation of Lauryn Hill
  2. Yeah Yeah Yeahs – Fever to Tell (It’s Blitz could probably go here too)
  3. M.I.A. – Arular
  4. Neko Case – Fox Confessor Brings the Flood
  5. Janelle Monáe – The ArchAndroid
  6. Aimee Mann – Bachelor No. 2 (or, the last remains of the dodo)
  7. Fiona Apple – Extraordinary Machine
  8. Robyn – Body Talk
  9. PJ Harvey – Stories from the City, Stories from the Sea
  10. Loretta Lynn – Van Lear Rose

Other alternates: any number of Sleater-Kinney albums; and although I’m not a huge Björk fan, any number of Björk albums could go too.

Because all lists are inherently arbitrary, without a requirement to weigh anything other than “what’s good,” a bunch of mostly white male rock nerds will inevitably pick a bunch of mostly white male rock music as “the best.” Take a look at any compendium of “best albums ever” lists and you’ll see the skew in effect.

But if that were my Top Ten list from the 1996-2011 time period, I’d be pretty happy.

With modern materials, motors, etc., humans have apparently achieved flight like birds, flappy wings and all.

I often wonder about path dependency and determinism, particularly when it comes to inventions and innovations. For example, if more advanced fabrics and miniaturized motors had existed in the early 1900s, would we have seen flappy flight before fixed-wing flight? Or did we need fixed-wing flight to get miniaturized motors? A world with flight but without fixed-wing flight seems impossible to imagine; yet is it really so implausible? Experimenters at the time certainly liked the idea. With different materials, who knows what they could have done?

When the term “thinking outside the box” is bandied about, I always wonder who among the human race will actually go out and do the crazy thing that everyone thinks is stupid. Apparently this guy. Also these dudes. Flying looks fantastic:

Because we have to keep up our lead in Useless Bullshit somehow.

It’s worth waiting until the very end, too.

It’s the human walk on water walking roller ball!*

You apparently seal yourself inside and then can walk on water. And once you start, one of two things will happen:

(1) The ball is not sealed well enough to prevent water from coming in, in which case you either drown or nearly drown (like the top reviewer says he did!). Or,

(2) The ball is sealed well enough to prevent water from coming in, but, kids, don’t have fun out there for too long, or else you’ll run out of oxygen and pass out!

How is this a thing?

*Yes, that’s the actual name of it.

I was thinking about buying a probe thermometer because I would like to better assess the doneness of my meat, so I went to Amazon to take a look. I typed in “probe thermometer” and Amazon returned a set of thermometers. They vary in price and functionality, but they all have one thing in common: 3.5 stars.

The other day, I decided I wanted to try a new Indian restaurant in my area other than the few I always frequent. Lo and behold, they all have 3.5 stars on Yelp.

It seems obvious that, given enough reviewers, customer reviews should tend to mellow out around, well, 3.5 stars. But there are notable exceptions (consider, for example, IMDb).

What is it about product/service reviews in particular that seems to promote the averaging out around 3.5 stars? Let’s take one heavily reviewed Amazon product — an excellent book that I recently read called The Art of Fielding (in my opinion, the best book ever written about baseball):

5 star: (100)
4 star: (34)
3 star: (35)
2 star: (29)
1 star: (50)

It’s an odd distribution, to say the least, but it highlights the selection problem among people who write online reviews. Their reactions are overwhelmingly very high (“I enjoyed this book so much that it warranted a review”) or very low (“this book was so bad that I decided to review it”). Compare this with the reviews for the digital probe thermometers and the effect is similar:

5 star: (41)
4 star: (23)
3 star: (10)
2 star: (12)
1 star: (28)

Again, lots of 5-star reviews and lots of 1-star reviews. And again, this points to the kind of person willing to review the product at all. In buying the product, you had already analyzed it and expected it to be worth your money. If you were extremely impressed or extremely disappointed, you reviewed it. If you were meh, why bother reviewing?

Thus, all popular products inevitably end up in the meh bin.
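The arithmetic behind the meh bin is easy to check. A minimal Python sketch, using the two star-count distributions quoted above, computes the weighted average rating for each:

```python
# Star-count distributions quoted above: {stars: number_of_reviews}
book = {5: 100, 4: 34, 3: 35, 2: 29, 1: 50}        # The Art of Fielding
thermometer = {5: 41, 4: 23, 3: 10, 2: 12, 1: 28}  # probe thermometer

def average_stars(counts):
    """Weighted average rating from a {stars: count} distribution."""
    total_reviews = sum(counts.values())
    total_stars = sum(stars * n for stars, n in counts.items())
    return total_stars / total_reviews

print(round(average_stars(book), 2))         # → 3.42
print(round(average_stars(thermometer), 2))  # → 3.32
```

Both land in the mid-threes even though the distributions underneath are U-shaped, which is exactly why the headline number hides more than it reveals.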

Weirdly, this does not apply to non-book pop culture. For example, Adele’s 21 sports a shocking 4.5-star rating. But this has to do with the ability to sample the wares before you buy (same with movies): before buying an album, you have usually heard the songs enough to know whether you like it.

What’s unnerving about this tyranny of the 3.5-star review is that it makes customer reviews essentially worthless. The whole promise of crowd-sourced reviews was that they would break the monopoly of professional product reviewers and open everything up to the masses. Instead, the incentive structure ensures that the reviews provide little to no value to the consumer.

Whether it’s recipes or Zadie Smith novels, the “pretty good” averaging out of reviews has hurt their ability to tell us much about the product we’re buying. In the end, we either end up trusting the qualitative reviews over the quantitative (a bad proposition, if you ask me) or we buy the product and hope for a generous return policy.

As with much of our new information-heavy world, these reviews do not make our decisions any easier; they are just more information, without any way to understand what it really means.

NEKKID GRANDMA!

I love everything about this clip: the immediacy and certainty (and loudness) of the answer, the agreement from the other contestant, the extremely generous awarding of the answer.

I had to leave my lab for a good several minutes to prevent myself from exploding/crying/dying of laughter in front of my labmates.

The key changes when the guys start singing are really striking. This genre of video fascinates me, as it highlights the combination of group participation, response to music, mass democracy, voyeurism, and exhibitionism that the Internet enables. That is to say, this is a piece of art that is entirely new and could never have existed before this moment in history.

People can complain about whether or not mashup is worthwhile, but it is at least novel.

Once your favorite team has been knocked out of contention (and this happens often to Cubs fans), sporting events continue to occur and sports fans continue to watch them. How do you choose which team to root for, particularly when you have no connection to either team?

So, I present to you my rooting hierarchy for NFL football, but be warned that it is still full of caveats and loopholes.

  1. My team (in this case, the Bears, still, although I accept that one can adopt a new home team after a three-year waiting period, shortened to two years if the team is cosmically bad).
  2. Teams with some regional connection or personal link (e.g. the Colts, as a result of my attendance of Indiana University).
  3. Teams that play interesting or creative football (e.g. the Saints or the 49ers).
  4. Big underdogs (e.g. the Rams).
  5. Teams that I basically don’t care about at all but respect for their general competence (e.g. the Falcons).
  6. Historical rival teams (e.g. the Packers).
  7. Teams with obnoxious fans (e.g. the Patriots, the Giants).
  8. Teams I dislike on principle (e.g. the Jets).
  9. Teams I hate profoundly for a specific, usually time-sensitive, reason (e.g. Tebowmania and/or the Steelers for Ben Roethlisberger).

Although this hierarchy generally holds, multiple categories can apply to the same team. In that case, the lower rung wins out; e.g., the Broncos were big underdogs (#4), but unfortunately they had Tebowmania, which means that #9 applied and I actually rooted for the Patriots yesterday. Similarly, although the Giants would normally be in the competent-teams category (#5), the obnoxiousness of their fans that I have observed in Connecticut puts them into category #7, meaning I have to root for the Packers (a strange outcome).
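The tiebreak rule above (when several categories apply to one team, the lowest rung, i.e. the highest number, governs) can be sketched in a few lines of Python. The category labels are paraphrased from the list; the function is hypothetical, not anything official:

```python
# Paraphrased rooting hierarchy from the list above (higher number = lower rung).
HIERARCHY = {
    1: "my team",
    4: "big underdog",
    5: "respected for competence",
    7: "obnoxious fans",
    9: "profound, time-sensitive hatred",
}

def governing_category(applicable):
    """When multiple categories apply to one team, the lowest rung (highest number) wins."""
    return max(applicable)

# Broncos: big underdogs (#4), but Tebowmania (#9) governs.
print(governing_category({4, 9}))  # → 9
```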

In some other sports, there’s a category between #2 and #3 for players I really like, but in football, there are just too many players.

So, in case you were wondering, that meant my rooting for the divisional series was Pats over Broncos, Saints over 49ers (but just… for being more creative), Packers over Giants, and Texans over Ravens.

So let’s say you are PepsiCo (remember, corporations are people, so just run with it). Someone sues you because he claims that he opened a can of Mountain Dew, only to discover that there was a mouse inside. You decide that the guy is full of it, and here’s how you know: if a mouse had really been sealed inside a can of Mountain Dew at production, by the time the guy opened the can, the mouse would have dissolved! Or, more accurately, it would have been “transformed into a jelly-like substance,” presumably not easily identifiable as a rodent of any sort.

What do you do? Do you pay a settlement out of court to make the problem go away and not have to pay your expensive lawyers? Or do you announce this to the world as your defense?

PepsiCo has chosen the latter. Interesting marketing call.
