Osama bin Laden's infodump is as long as the first Harry Potter book, and other revelations of a preliminary text analysis

Yesterday as of this writing (late May 2015), the U.S. Office of the Director of National Intelligence released 103 documents written by Osama bin Laden, captured during the raid on Abbottabad. Of course, my first thought was, "Great, a corpus!"

Bearing in mind that these are translations, here's my preliminary text analysis of the infodump. If you want to see how I did the analysis, check out this IPython notebook.

First of all, to get a taste of the flavor of the whole collection, here's some randomly generated text based on it using a Markov chain:


The secretary of Muslims, but it the new environments at your news, always to take statements from our command of the issue is no harm, or is preferable to convince any operation must take Hamzah arrives here, such expression without an exhortation and all the world. This should be using such as a documentary about Ibn ‘Abbas… In the list you and so that viewed it comes from God. We, thank you and those companions and reads and ready to live in Gaza? Connect them all that he returned to spread the hearts [of the women who may respond to hearing them. He is what it is established system. Usually, these conditions, even with it. He might take me to give support such that it in the path for an end is a dangerous and they are you wish I am just a brother to be upon him very important of the family and that rose up the frontiers area (Islamic Maghreb); so were the reality of those who follows this great hypocrite. I am blessed with his companions… Furthermore, To the world as in missing the tenth anniversary in Algeria more religious issues.
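
For the curious, a word-level Markov chain like the one behind that sample takes only a few lines of Python. This is just a minimal sketch, assuming the 103 translated documents have been concatenated into a single string called corpus_text (a hypothetical name), not the exact code from my notebook:

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, n_words=100):
    """Random-walk the chain to produce roughly n_words of text."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(n_words - len(key)):
        next_words = chain.get(tuple(output[-len(key):]))
        if not next_words:  # dead end: restart from a random state
            key = random.choice(list(chain.keys()))
            next_words = chain[key]
        output.append(random.choice(next_words))
    return " ".join(output)

# corpus_text: all 103 translated documents joined into one string (assumed)
# print(generate(build_markov_chain(corpus_text), n_words=200))
```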

Now the word clouds. Everybody loves word clouds. Except for the people who hate word clouds because they're semi-quantitative at best. Well, you can't have everything.

Here's the word cloud of the vocabulary in all of the documents:


Unsurprisingly, OBL uses a lot of religious terms; 'god' is used so frequently it's in most of the top results when we look at bigrams (two-word phrases):



For the purists, there are bar graphs at the end of the post.
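
If you're wondering how bigram counts like these are produced, here's a minimal sketch (assuming a list called documents holding the 103 translated texts; the notebook linked above has the real version, including stopword handling):

```python
import re
from collections import Counter

def bigram_counts(documents, stopwords=()):
    """Count two-word phrases across a list of document strings."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())
        tokens = [t for t in tokens if t not in stopwords]
        counts.update(zip(tokens, tokens[1:]))  # consecutive word pairs
    return counts

# documents: list of the 103 translated texts (assumed)
# for (w1, w2), n in bigram_counts(documents).most_common(20):
#     print(f"{w1} {w2}: {n}")
```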

Here's the distribution of document length among the 103 letters/documents:


Most of the documents are short letters, but a couple of them are really long (about 20 pages, double spaced). The two long ones are A Letter to the Sunnah People in Syria and Letter to Shaykh Abu Abdallah dated 17 July 2010.
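
The word counts behind that histogram take only a few lines; here's a minimal sketch, assuming the documents are saved as plain-text files in a folder (the folder name is a hypothetical stand-in):

```python
import glob
import matplotlib.pyplot as plt

# hypothetical folder holding the 103 translated .txt files
paths = glob.glob("bin_laden_documents/*.txt")
word_counts = []
for path in paths:
    with open(path, encoding="utf-8") as f:
        word_counts.append(len(f.read().split()))

plt.hist(word_counts, bins=20)
plt.xlabel("Words per document")
plt.ylabel("Number of documents")
plt.title("Distribution of document length")
plt.show()
```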

The total length of the correspondence is 74,908 words. Here's how that length compares to some well-known novels:




Osama bin Laden's correspondence is about as long as the first Harry Potter book (which you may know as Harry Potter and the Sorcerer's Stone on the left side of the pond).

I also compared the reading level of the (and I stress, translated) text to that of some well-known authors using the Flesch-Kincaid formula, which must be taken with a grain of salt since it sometimes gives weird results, like scoring Shakespeare below children's books:

It's about as hard to read as Mark Twain, so it's got that going for it, which is nice.
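
For reference, the Flesch-Kincaid grade level is just a weighted combination of average sentence length and average syllables per word. Here's a minimal sketch with a crude syllable heuristic; my notebook may well use a different implementation:

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of vowels, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# print(flesch_kincaid_grade(corpus_text))  # corpus_text as above (assumed)
```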

I also did a quick topic modelling using the NMF (non-negative matrix factorization) algorithm, which determines which words best separate the 103 documents into similar groups, or clusters. Here are the results, in no particular order; the topics were named by me, based on nothing but intuition (a code sketch follows the list).

Topic 1, "feminine and prayerful": god, peace, dear, praise, sister, blessing, mercy, willing, letter, prayer

Topic 2, "addressing those in power":  al, shaykh, brother, abu, letter,  wa, mahmud, muhammad, god, informed

Topic 3: "family and god":  allah, brother, mercy, ask, wa,  al, praise, father, child, know

Topic 4: "jihad":  god, said, people, ha, crusader,  jihad, nation, war, ye, islam

Topic 5: "Arab spring":  revolution, people, muslim, regime, ummah,  egypt, opportunity, blood, ruler, wa
Again, if you want to see my methodology, look here. Finally, here are the bar graphs corresponding to the word clouds at the top of the post (click to enlarge):

Word clouds made with Tagxedo.


Percentage of women in European national legislatures, 2014


Slovenia, Serbia and FYR Macedonia were kind of a surprise to me. Spain, too, a bit, and Belarus. Hungary, you got some explaining to do. France, vous me désappointez aussi.

As it does in every possible measure of national success, of course, Scandinavia rocks.

Mostly, this was a chance to take my hex grid choropleth of Europe out for a spin!

Most characteristic words in pro- and anti-feminist tweets

Here are, based on my analysis (which I'll get to in a moment), clouds of the 40 words most characteristic of anti-feminist and pro-feminist tweets, respectively.


anti-feminist | pro-feminist

Word clouds may be only semi-quantitative, but they have other virtues, like recognizability and explorability. For the purists, there's a bar chart below.

I'll mostly talk about my results here; the full methodology is available on my other, nerdier blog, which links to all the code so you can reproduce this analysis yourself, if you so desire. (We call ourselves data scientists, and science is supposed to be reproducible, so I strongly believe I should empower you to reproduce my results if you want ... or improve on them!) Please also read the caveats I've put at the bottom of this post.

Full disclosure: I call myself a feminist. But I believe my only agenda is to elucidate the differences in vocabulary that always arise around controversial topics. As CGP Grey explains brilliantly, social networks of ideologically polarized groups like Republicans and Democrats or atheists and religious people mostly interact within the group, only rarely participating in a rapprochement or (more likely) flame war with the other side. This is fertile ground for divergent vocabulary, especially in this case, where one group defines itself as opposed to the other (as if Democrats called themselves non-Republicans). I am not going into this project with a pro-feminist agenda, but of course I acknowledge I am biased. I worked hard to try to counter those biases, and I've made the code available for anyone to check my work. Feel free to disagree!

A brief (for me) description of the project: In January, I wrote a constantly running program that periodically searches the newest tweets for the terms 'feminism', 'feminist' or 'feminists' (at random intervals and to random depths, potentially as often as 1,500 tweets within 15 minutes), and collected almost 1,000,000 tweets up to April 2015. Then, with five teammates (we won both the Data Science and the Natural Language Processing prizes at the Montreal Big Data Week Hackathon on April 19, 2015), we manually curated 1,000 tweets as anti-feminist, pro-feminist or neither (decidedly not an obvious process; read more about it here). We used machine learning to classify the other 390,000 tweets (after we eliminated retweets and duplicates, i.e. anything that required only clicking instead of typing), then used the log-likelihood keyness method to find which words (or punctuation marks, etc.) were most overrepresented in each set.
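
For the curious, the log-likelihood keyness statistic (Dunning's G²) measures how much a word's frequency in one corpus departs from what you'd expect given its frequency in both corpora combined. Here's a minimal sketch, assuming pro_counts and anti_counts are Counters of token frequencies in the two classified sets (hypothetical names, not my exact code):

```python
import math
from collections import Counter

def log_likelihood_keyness(word, counts_a, counts_b):
    """Dunning's G2 for one word's frequency in corpus A vs. corpus B.
    counts_a / counts_b are Counters of token frequencies."""
    a, b = counts_a[word], counts_b[word]                   # observed counts
    c, d = sum(counts_a.values()), sum(counts_b.values())   # corpus sizes
    e1 = c * (a + b) / (c + d)                              # expected count in A
    e2 = d * (a + b) / (c + d)                              # expected count in B
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

# anti_counts, pro_counts: Counters of tokens in each classified set (assumed)
# keyness = {w: log_likelihood_keyness(w, anti_counts, pro_counts)
#            for w in set(anti_counts) | set(pro_counts)}
```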

And here are my observations:

1. Pro-feminists (PFs) tweet about feminism and feminist (adjective), anti-feminists (AFs) tweet about feminists, as a group.
Since these are search terms, at least one of them was in every tweet, so their absolute log-likelihood values are inflated and I left them out of the word clouds. However, the differences between them are valid, and instructive. (But see the caveats below.) AFs seem to be more concerned with feminists as a collective noun (they tweet about the people they oppose, not the movement or ideology), while PFs tweet about feminism or feminist (usually as an adjective).
2. PFs use first- and second-person pronouns, AFs use third-person pronouns
Similarly to #1 above, and inevitably when one group defines itself as not belonging to the other, AFs tweet about feminists as a plural group of other people, while PFs tweet about and among themselves. Note that in NLP, pronouns are usually so common they're considered "stopwords" and are eliminated from the analysis. But with 140-character tweets, I figured every word was chosen with a certain amount of care.
3. The groups use different linking words to define feminism
PFs talk about what feminism is for or about, why we need feminism, what feminism is and isn't, what feminists believe; AFs tweet about what feminists want, ask can someone explain why feminists engage in certain behaviors which they don't get, say feminists are too <insert adjective>, and often use the construction With <this, then that>.
4. PFs link to external content, AFs link to local and self-created content.
PFs link more in general to http content via other websites; AFs use the #gamergate hashtag, reference @meninisttweet, and link to @youtube videos rather than traditional media (that term doesn't appear in the word cloud, but it has a log-likelihood of 444 in favor of AFs). AFs also reference their platform, Twitter, a lot; PFs don't, presumably because they're also interacting in other ways.
5. AFs use more punctuation
Besides "feminists", the number-one token for AFs was the question mark; they have a lot of questions for and about feminists, many of them rhetorical. The exclamation point wasn't far behind, followed by the quotation mark, both to quote and to show irony. PFs start tweets with '+' and "=" (usually as '==>') for emphasis. Rounding out the non-alphabetic characters, AFs use 2 as a shorter form of 'to' or 'too', while PFs link more often to listicles with 5 items.
6. AFs tweet more about feminist history.
Unsurprisingly, PFs tweet about their goals, equality and rights, and defend themselves against accusations of misandry. But it's the AFs who tweet about modern and third-wave feminism, displaying knowledge about the history of the movement.
7. PFs use more gender-related terms
This one is all PF: they reference gender, genders, sexes, men and women more than AFs.
8. AFs use more pejorative terms
AFs use fuck, hate, annoying and, unfortunately, rape a lot; they also use derisive terms like lol, the "face with tears of joy" emoji and smh (shaking my head, not in the top 40 but still a high log-likelihood value of 484).
Caveats:
  • Selection bias: the dataset does not include any tweets with pro- or anti-feminist sentiment that do not include the search terms 'feminist', 'feminists' or 'feminism'
  • Noise in the signal, part 1. It's difficult to analyze tweets for the underlying attitude (pro- or anti-feminism) of the author; it involves some mind-reading. We tried to mitigate this by classifying any tweet we had the slightest doubt about into a "neither pro nor anti" category. Of course, that just shifts the noise elsewhere, but hopefully it keeps down the misclassifications between our two groups of interest, pro- and anti-.
  • Noise in the signal, part 2. We used 1,000 tweets to predict the attitudes of 390,000 tweets. Obviously this is going to be an imperfect mapping of tweet to underlying attitude. This kind of analysis does not require anywhere near 100% accuracy (we got between 40% and 60%, depending on the metric, both of which are better than random choice, which would give 33%). The log-likelihood method is robust, and will tend to eliminate misclassified words. In other words, we may not be confident these top 40 words and tokens are the same top 40 words and tokens that would result if we manually curated all 390,000 tweets, but we are confident these top 40 words and tokens are significantly characteristic of the two groups we identified in our curated tweets.
  • If you have doubts as to my methods or results, great, that's what science is all about. Please feel free to analyze the code, the dataset, the manual curation, and the log-likelihood results linked to in my other blog.
  • It is not my goal to criticize or mock anti-feminists, and I hope I've kept my tone analytical. There's a Venn diagram between stuff feminists say (and of course they don't all say anywhere near the same thing), stuff anti-feminists say, and things I agree with, and it's not straightforward. What interested me here was the language. That said, I hope I've contributed a little bit to understanding the vocabulary surrounding the issue, and in general, I believe more knowledge is better than less knowledge.
Word clouds made with Tagxedo.

Baby Boom: An Excel Tutorial on Analyzing Large Data Sets

tl;dr: I wrote a data science tutorial for Excel for the good folks at Udemy: click here!


The usual progression I've seen in data science is the following:

  1. Start out learning data analysis with Microsoft Excel
  2. Switch to a more powerful analysis environment like R or Python
  3. Look down one's nose at everybody still using Excel
  4. Come to realize, hey, Excel's not so bad
I'll admit, I was stuck at Step 3 for a few weeks, but luckily I got most of my annoying pooh-poohing (if you're not a native English speaker, that expression might not mean what you think it means) out of my system decades ago when I was a proofreader (hence my nickname, if you were curious).

I think most mature data scientists see Excel as an essential and useful part of the ecosystem. The way it brings you so close to your raw data is invaluable in the early stages for developing data literacy, and later on, when you're munging vectors and dataframes, it can still be useful to fire up a .csv and have a look-see with no layers of abstraction above it.

Feedback is welcome. I'm not involved with the rest of the Excel course, but I have taken the Complete Web Developer course from Udemy and recommend it. I get absolutely no money for referrals or anything like that (or for page visits for my tutorial for that matter), so this is honest, cross my heart.


Dialogue plot of Star Trek: The Original Series

First, the plot. Hover over the points to see the character names.



Why Star Trek? Well, I'm working on an in-depth analysis of all of Shakespeare's plays, so I'm vetting my method on Star Trek because (a) the size of the corpus is much smaller so each step in development takes less time and (b) I'm, sadly, more immediately familiar with the minutiae of Star Trek because reruns were on every day after school when I was growing up, so I'm more able to notice trends and problems.

This isn't the finished product, but I thought it was interesting enough to warrant an interim blog post. All of the guest characters along the bottom appeared in one episode (except for a handful, like those in both parts of The Menagerie and Harcourt Fenton Mudd, who appeared in two). Trelane (if you're too young for TOS, he's sort of like a proto-Q from TNG) has the most dialogue per episode of any TOS character, guest or regular (if you've seen the episode, this will not surprise you). The super-speed Scalosian Queen Deela is the female character with the most dialogue; in fact, most of the high-dialogue guest stars are antagonists. Edith Keeler is the largest Kirk-love-interest part (ah, Joan Collins in the '60s); in general, Kirk was attracted to women due to the size of things other than their vocabularies, it seems (sorry, sorry, couldn't resist).

I tend to think of TOS as an ensemble drama, but Kirk is really the only regular with more dialogue than most of the main guest stars. Kirk and Spock are the only characters who appear in all 79 episodes (McCoy is missing from one... I challenge you to leave a comment below saying which episode that is). Uhura is in more episodes than the rest of the supporting cast, but speaks less ("Hailing frequencies open, Captain" is only four words, after all). Interestingly, Yeoman Janice Rand has more dialogue per episode than any supporting character except Scotty, but she's way down the vertical axis because she was fired after 15 episodes, either (a) because they'd exhausted her flirtiness potential with Kirk, (b) because she was showing up to work drunk, or (c) because she objected to being sexually assaulted by a TV executive, depending on the version of events.

Finally, the Enterprise computer voice has slightly more words per episode than Nurse Chapel; they were voiced and played, respectively, by the same actress, Majel Barrett, beloved of Trek fans and of series creator Gene Roddenberry.

I got the scripts from www.chakotaya.net; they appear to be fan-transcribed scripts (hey, in the '60s, that's all you could do. I myself made one in 1996 of my favorite X-Files episode, Jose Chung's From Outer Space). They're rather error-prone (as is to be expected), so if you want to see the gory details of how I cleaned them up and made the graph in Bokeh, check out this GitHub repo or go directly to this IPython notebook.
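
If you're curious what the Bokeh code looks like, here's a minimal sketch of a scatter plot with hover labels; the character data and axis choices are placeholders, and the real aggregation lives in the notebook linked above:

```python
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, HoverTool

# placeholder per-character aggregates (the real numbers come from the scripts)
source = ColumnDataSource(data=dict(
    name=["Kirk", "Spock", "Trelane"],
    episodes=[79, 79, 1],
    words_per_episode=[1500, 1100, 2400],
))

p = figure(title="Star Trek TOS dialogue",
           x_axis_label="Words per episode",
           y_axis_label="Episodes appeared in")
p.scatter("words_per_episode", "episodes", size=8, source=source)
p.add_tools(HoverTool(tooltips=[("character", "@name")]))  # show name on hover
show(p)
```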

Visualizing 10 unusual causes of death in the CDC mortality database

Let me make two things clear right up front:
  1. The metrics I used to decide what causes of death are unusual are purely subjective, i.e. which of the thousands of causes I skimmed through caught my eye and made me go, "Huh."
  2. It is in no way my intention to make fun of anyone's death. I find these causes of death unusual, not amusing.
Also: I am a great believer in reproducible data science, so as always, I've made available everything anyone would need to reproduce (or extend!) my results in an IPython notebook (nbviewer version or faster-loading html version) and this GitHub repo folder.

The U.S. Centers for Disease Control maintains a data service called WONDER (Wide-ranging OnLine Data for Epidemiologic Research); among its databases is the Compressed Mortality File tracking underlying cause of death from 1968 to 2012.

The causes of death are taken from the International Classification of Diseases (which contains an enormous number of causes of death that are not what I would call diseases, such as being struck by a train). It went through revisions in 1979 and 1999, so the categories do not match up cleanly across the years. For example, after 1978 "transvestitism" is no longer listed as a possible cause of death. (I'm not making this up. There are no deaths attributed to transvestitism in this database, but it's there in the schema, so perhaps it was assigned to someone before 1968.)

Tools used: Python (with pandas and plotly) and Photoshop.
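
Each of the graphs below boils down to a filter-and-plot pattern like this minimal sketch; the file name and column names are hypothetical stand-ins for the actual WONDER export, and I'm using the current plotly API rather than the 2015-era one:

```python
import pandas as pd
import plotly.graph_objects as go

# hypothetical export: tab-delimited with Year / Cause / Deaths columns
df = pd.read_csv("compressed_mortality.txt", sep="\t")
subset = df[df["Cause"].str.contains("caries", case=False, na=False)]

# one stacked bar per year, one trace per matching cause-of-death category
fig = go.Figure()
for cause, group in subset.groupby("Cause"):
    fig.add_trace(go.Bar(x=group["Year"], y=group["Deaths"], name=cause))

fig.update_layout(barmode="stack",
                  title="Deaths per year",
                  xaxis_title="Year",
                  yaxis_title="Deaths")
fig.show()
```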

1. Dental caries

If my dentist had told me cavities could result in death, I might have flossed more often. We can see the change in cause-of-death definitions, as the entries from 1999 on have slightly different wording.


2. Weather or storm

Here we see even more clearly the divide between cause of death classifications. I don't know what they called these deaths before 1979, or what the big event was in 1980. I wonder about 2005; could it be Hurricane Katrina? The ICD-10 lists 'hurricanes' as a separate cause of death, but you always have to allow for human error in assigning these categories.


3. Migraine

A good friend of mine has her life encompassed by her migraines; I had no idea they could result in death. You can see that after the 1979 revision, increased knowledge of this condition led to parsing into further categories.


4. Spacecraft

This one I find somewhat puzzling. Since the database starts in 1968, we just miss the Apollo 1 fire the year before, but what about the seven deaths aboard the space shuttle Challenger in 1986?


5. Conjunctivitis

I had conjunctivitis, or 'pinkeye', at least once as a child, and it was no big deal, so I was bemused at the first season South Park episode in which the entire town is so afraid of pinkeye, they confuse it with zombification. (As an aside, this was the first time I'd ever heard of pirated media, as a coworker of mine downloaded it off Usenet in 1997.) Turns out it can be deadly, and there are many, many categories of conjunctivitis deaths. (The graph's defaults don't have enough different colors to differentiate them all, but I think the forest matters more than the trees here.)


6. Cleft palate or cleft lip

I find it encouraging that the death rate for this condition appears to have gone down. My dad grew up in the '50s with a girl with a "hare lip", as he called it, and hearing stories about it as a kid, I felt so bad for her. Had I known it was a cause of death (and a considerably more substantial one than many others on this list), my secondhand suffering would have been even worse.

7. Elbow

I tried to think of the part of the body least likely to be listed as a cause of death. Here it is. If you're wondering why there are six causes of death in the legend and four in the graph, it means two had no entries during the time period. Also, the cause 'Of elbow' means it was a subgroup of a supergroup that does not appear in the database I downloaded (I could have downloaded the supergroup fields, but I didn't; the file was huge enough already).

I'm assuming there was no outbreak of elbow deaths in the '80s and '90s, and that the higher bars are due to differences in classification criteria. 'Enthesopathy' (a disorder of bone attachments) only appears from 1979 on, and its diagnosis drops for all bones in 1999. If you're curious, you can see the graph in my gist notebook.


8. Animal

This one was a little tricky. There are 13 categories of vehicle occupants in collision with 'pedestrian or animal' to remove, and then I thought to check specific animals like dogs and bees (cat fanciers will be happy to know there is no category devoted to death by feline, and yes, bees are animals).


9. Ingrown nail

I actually had a pretty badly infected ingrown toenail as a kid. Still, it appears my odds were pretty good, as there are fewer than two deaths per decade attributed to it.


10. War

You wouldn't think war would be an unusual cause of death, the world being what it is, but I find the low numbers attributed to it unusual. There's absolutely no increase when the Iraq war starts in 2003. Make of it what you will.


Most decade-specific words in Billboard popular song titles, 1890-2014


Chart first, then explanation (click to enlarge):

The inspiration for this post came from my being too lazy to set my iPod to shuffle, and then noticing it played a bunch of songs in a row from the 1930s and '40s that started with the letters "in" ("In the Wee Small Hours of the Morning," "In the Still of the Night", etc.). Naturally, being a data nerd, my first thought was to quantify the phenomenon.

The data comes not from Billboard itself, but from www.bullfrogspond.com; I don't know much about the data source, but it certainly looks thorough, painstaking, and up to date. If you'd like to know a little more about my methodology (like a quick explanation of the metric, "keyness"), to see the code I used, and/or to see the actual songs that correspond to these words, head on over to my other, nerdier blog, prooffreaderplus.
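
The decade-binning step is simple; here's a minimal sketch, assuming a CSV of songs with year and title columns (hypothetical names), and reusing the log_likelihood_keyness() function from the feminism-tweets sketch above rather than what's on prooffreaderplus:

```python
import re
from collections import Counter
import pandas as pd

# hypothetical file: one row per charting song, with 'year' and 'title' columns
songs = pd.read_csv("billboard_songs.csv")
songs["decade"] = (songs["year"] // 10) * 10

# count title words within each decade
decade_counts = {}
for decade, group in songs.groupby("decade"):
    tokens = []
    for title in group["title"]:
        tokens.extend(re.findall(r"[a-z']+", str(title).lower()))
    decade_counts[decade] = Counter(tokens)

# score each decade's words against all other decades pooled together
for decade, counts in sorted(decade_counts.items()):
    rest = Counter()
    for other, c in decade_counts.items():
        if other != decade:
            rest.update(c)
    # log_likelihood_keyness() as defined in the tweet-analysis sketch above
    scores = {w: log_likelihood_keyness(w, counts, rest) for w in counts}
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print(decade, top)
```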

Observations about the results:
  • The 2010s seem both more vulgar ("hell" and "fuck") and more inclusive ("we" instead of the "you", "ya" and "u" of the 1990s and 2000s).
  • The 1990s and 2000s were the decades of neologisms, with "U", "Ya" and "Thang". "U" was so popular it occurred twice (but see the note on decade-binning on prooffreaderplus.)
  • Fun! Lots of the decades can be made into intelligible five-word sentences. For example: "Hell Yeah, We Die, Fuck!" (2010s). "Ya Breathe It Like U" (2000s), "You Get Up, U Thang" (1990s), "Don't Rock On Fire, Love" (1980s), "Sing, Moon, In A Swing" (1930s)
  • As anyone who listens to the radio in December knows, all the Christmas songs are oldies, and that shows in the results for the 1950s, with "Christmas" and "Red-nosed".
  • You can track genres with the keywords: "Rag" (1910s), "Blues" (1920s), "Swing" (1930s), "Boogie", "Polka" (1940s), "Mambo" (1950s), "Twist" (1960s), "Disco" (1970s), "Rock" (1970s and 1980s). After that, people realized you don't have to actually name the genre in the song title, people can figure it out by listening. (N'Sync must not have gotten that memo for 2001's "Pop".)
  • Who knew Billboard song rankings went back to the 1890s? It was a surprise to me. That fact, and the fact that there are fewer songs then, but not so few as to be negligible, influenced a lot of the choices about how I presented this data (read more here if you want). But those early decades seem to be more focused on first names ("Michael", "Reuben", "Casey") and familial relationships ("Uncle", "Mammy").
  • The first two decades -- the oldest ones compared to now -- both have the keyword "old". I blame time travel.
  • I find it interesting that there are short, common articles, adverbs, prepositions and pronouns in the list; these have a higher bar for keyness, since they're present in other decades: "When" (1900s), "A" (1930s), "In" (1930s), "On" (1980s), "Up" (1990s), "It" (2000s)
Now if you'll excuse me, I'm going to hunt through my iPod to see if there's even one song with "gems" in the title; it seems to have been popular in the 1910s.