
I had both CNN and Twitter on yesterday all afternoon, looking for news about the Boston Marathon bombings. I have not done a rigorous analysis (nor will I, nor have I ever), but it felt to me that Twitter put forward more and more varied claims about the situation, and reacted faster to misstatements. CNN plodded along, but didn’t feel more reliable overall. This seems predictable given the unfiltered (or post-filtered) nature of Twitter.

But Twitter also ran into some scaling problems for me yesterday. I follow about 500 people on Twitter, which gives my stream a pace and variety that I find helpful on a normal day. But yesterday afternoon, the stream roared by, and approached filter failure. A couple of changes would help:

First, let us sort by most retweeted. When I’m in my “home stream,” let me choose a frequency of tweets so that the scrolling doesn’t become unwatchable; use the frequency to determine the threshold for the number of retweets required. (Alternatively: simply highlight highly re-tweeted tweets.)

Second, let us mute based on hashtag or by user. Some Twitter cascades I just don’t care about. For example, I don’t want to hear play-by-plays of the World Series, and I know that many of the people who follow me get seriously annoyed when I suddenly am tweeting twice a minute during a presidential debate. So let us temporarily suppress tweet streams we don’t care about.
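The two filters proposed above can be sketched in a few lines. This is a minimal illustration, assuming tweets arrive as simple dicts; the field names (`retweets`, `user`, `hashtags`) are hypothetical and not Twitter's actual API.

```python
# Sketch of the two proposed filters: a retweet threshold and
# muting by hashtag or user. Field names are illustrative only.

def filter_stream(tweets, min_retweets=0, muted_tags=(), muted_users=()):
    """Yield tweets that clear the retweet threshold and aren't muted."""
    for tweet in tweets:
        if tweet["retweets"] < min_retweets:
            continue  # below the visibility threshold
        if tweet["user"] in muted_users:
            continue  # temporarily muted user
        if any(tag in muted_tags for tag in tweet["hashtags"]):
            continue  # cascade we don't care about
        yield tweet

stream = [
    {"user": "alice", "retweets": 120, "hashtags": ["news"]},
    {"user": "bob", "retweets": 2, "hashtags": ["worldseries"]},
    {"user": "carol", "retweets": 500, "hashtags": ["worldseries"]},
]

kept = list(filter_stream(stream, min_retweets=10, muted_tags={"worldseries"}))
# Only alice's tweet survives: bob is below the threshold, carol is muted.
```

A real client would let the user tune `min_retweets` indirectly, by choosing a scrolling pace and deriving the threshold from it.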

It is a lesson of the Web that as services scale up, they need to provide more and more ways of filtering. Twitter had “follow” as an initial filter, and users then came up with hashtags as a second filter. It’s time for a new round as Twitter becomes an essential part of our news ecosystem.

Steve Coll has a good piece in the New Yorker about the importance of Al Qaeda as a brand:

…as long as there are bands of violent Islamic radicals anywhere in the world who find it attractive to call themselves Al Qaeda, a formal state of war may exist between Al Qaeda and America. The Hundred Years War could seem a brief skirmish in comparison.

This is a different category of issue than the oft-criticized “war on terror,” which is a war against a tactic, not against an enemy. The war against Al Qaeda implies that there is a structurally unified enemy organization. How do you declare victory against a group that refuses to enforce its trademark?

In this, the war against Al Qaeda (which is quite preferable to a war against terror — and I think Steve agrees) is similar to the war on cancer. Cancer is not a single disease and the various things we call cancer are unlikely to have a single cause and thus are unlikely to have a single cure (or so I have been told). While this line of thinking would seem to reinforce politicians’ referring to terrorism as a “cancer,” the same applies to dessert. Each of these terms probably does have a single identifying characteristic, which means they are not classic examples of Wittgensteinian family resemblances: all terrorism involves a non-state attack that aims at terrifying the civilian population, all cancers involve “unregulated cell growth” [thank you Wikipedia!], and all desserts are designed primarily for taste not nutrition and are intended to end a meal. In fact, the war on Al Qaeda is actually more like the war on dessert than like the war on cancer, because just as there will always be some terrorist group that takes up the Al Qaeda name, there will always be some boundary-pushing chef who declares that beef jerky or glazed ham cubes are the new dessert. You can’t defeat an enemy that can just rebrand itself.

I think that Steve Coll comes to the wrong conclusion, however. He ends his piece this way:

Yet the empirical case for a worldwide state of war against a corporeal thing called Al Qaeda looks increasingly threadbare. A war against a name is a war in name only.

I agree with the first sentence, but I draw two different conclusions. First, this has little bearing on how we actually respond to terrorism. The thinking that has us attacking terrorist groups (and at times their family gatherings) around the world is not made threadbare by the misnomer “war against Al Qaeda.” Second, isn’t it empirically obvious that a war against a name is not a war in name only?

A New Yorker article that profiles John Quijada, the inventor of a language (and a double-dotter!), mentions the first artificial language we know about, Lingua Ignota. The article’s author, Joshua Foer, tells us it was invented by Hildegard von Bingen (totally fun to say out loud) in the 12th century. “All that remains of her language is a short passage and a dictionary of a thousand and twelve words listed in hierarchical order, from the most important (Aigonz, God) to the least (Cauiz, cricket).” There’s more about Lingua Ignota over at our friend, Wikipedia. (And did you remember to kick in a few bucks to keep Wikipedia in booze and cigarettes?)

Ordering a list by cosmic importance (remember the Great Chain of Being?) makes sense if everyone agrees on what that order is. And it expresses respect for the order. That’s why some clergyfolk objected to the fact that Diderot’s Encyclopedia in the 18th century alphabetized its contents. Imagine Cows coming before God!

Before we sneer, we should keep in mind that we do the same thing when we make lists to be seen by others. For example, lists of donors put the Big Money folk first. For another example, we wouldn’t post a list of New Year’s resolutions in the following order:

My New Year’s Resolutions

  1. Floss regularly.

  2. Bring in an apple instead of snacking from the vending machine.

  3. Don’t let the ironing back up for more than a week.

  4. Quit meth.

  5. Refill the bird-feeder before it’s empty.

  6. Get those birthday cards in the mail on time!

And there are rhetorical rules for the order in which we give reasons to support an argument. For example, we often give the easiest reason to accept first, and lead up to the most serious reason: “It’s easy, it’ll save money, people will feel good about it, and it’s the right thing to do.” The phrase “most important, …” is not permitted to appear in the middle of a sentence.

Order is content.

There’s a knowingly ridiculous thread at Reddit at the moment: Which world leader would win if pitted against other leaders in a fight to the death?

The title is a straight line begging for punchlines. And it is a funny thread. Yet, I found it shockingly informative. The shock comes from realizing just how poorly informed I am.

My first reaction to the title was “Putin, duh!” That just shows you what I know. From the thread I learned that Joseph Kabila (Congo) and Boyko Borisov (Bulgaria) would kick Putin’s ass. Not to mention Jigme Khesar Namgyel Wangchuck (Bhutan), who would win on good looks.

Now, when I say that this thread is “shockingly informative,” I don’t mean that it gives sufficient or even relevant information about the leaders it discusses. After all, it focuses on their personal combat skills. Rather, it is an interesting example of the haphazard way information spreads when that spreading is participatory. We are unlikely, for example, to have sent around the Wikipedia article on Kabila or Borisov simply because we all should know about the people leading the nations of the world. Further, while there is more information about world leaders available than ever in human history, it is distributed across a huge mass of content from which we are free to pick and choose. That’s disappointing at best and disastrous at worst.

On the other hand, information is now passed around if it is made interesting, sometimes in jokey, demeaning ways, like an article that steers us toward beefcake (although the president of Ireland does make it up quite high in the Reddit thread). The information that gets propagated through this system is thus spotty and incomplete. It only becomes an occasion for serendipity if it is interesting, not simply because it’s worthwhile. But even jokey, demeaning posts can and should have links for those whose interest is piqued.

So, two unspectacular conclusions.

First, in our despair over the diminishing of a shared knowledge-base of important information, we should not ignore the off-kilter ways in which some worthwhile information does actually propagate through the system. Indeed, it is a system designed to propagate that which is off-kilter enough to be interesting. Not all of that “news,” however, is about water-skiing cats. Just most.

Second, we need to continue to have the discussion about whether there is in fact a shared news/knowledge-base that can be gathered and disseminated, whether there ever was, whether our populations ever actually came close to living up to that ideal, the price we paid for having a canon of news and knowledge, and whether the networking of knowledge opens up any positive possibilities for dealing with news and knowledge at scale. For example, perhaps a network is well-informed if it has experts on hand who can explain events at depth (and in interesting ways) on demand, rather than assuming that everyone has to be a little bit expert at everything.

[2b2k][eim] Over my head

I’m not sure how I came into possession of a copy of The Indexer, a publication by the Society of Indexers, but I thoroughly enjoyed it despite not being a professional indexer. Or, more exactly, because I’m not a professional indexer. It brings me joy to watch experts operate at levels far above me.

The issue of The Indexer I happen to have — Vol. 30, No. 1, March 2012 — focuses on digital trends, with several articles on the Semantic Web and XML-based indexes as well as several on broad trends in digital reading and digital books, and on graphical visualizations of digital indexes. All good.

I also enjoyed a recurring feature: Indexes reviewed. This aggregates snippets of book reviews that mention the quality of the indexes. Among the positive reviews, the Sunday Telegraph thinks that for the book My Dear Hugh, “the indexer had a better understanding of the book than the editor himself.” That’s certainly going on someone’s résumé!

I’m not sure why I enjoy works of expertise in fields I know little about. It’s true that I know a little about indexing because I’ve written about the organization of digital information, and even a little about indexing. And I have a lot of interest in the questions about the future of digital books that happen to be discussed in this particular issue of The Indexer. That enables me to make more sense of the journal than might otherwise be the case. But even so, what I enjoy most are the discussions of topics that exhibit the professionals’ deep involvement in their craft.

But I think what I enjoy most of all is the discovery that something as seemingly simple as generating an index turns out to be indefinitely deep. There are endless technical issues, but also fathomless questions of principle. There’s even indexer humor. For example, one of the index reviews notes that Craig Brown’s The Lost Diaries “gives references with deadpan precision (‘Greer, Germaine: condemns Queen, 13-14…condemns pineapple, 70…condemns fat, thin and medium sized women, 93…condemns kangaroos, 122’).”

As I’ve said before, everything is interesting if observed at the right level of detail.

Paul Deschner and I had a fascinating conversation yesterday with Jeffrey Wallman, head of the Tibetan Buddhist Resource Center, about perhaps getting his group’s metadata to interoperate with the library metadata we’ve been gathering. The TBRC has a fantastic collection of Tibetan books. So we were talking about the schemas we use — a schema being the set of slots you create for the data you capture. For example, if you’re gathering information about books, you’d have a schema that has slots for title, author, date, publisher, etc. Depending on your needs, you might also include slots for whether there are color illustrations, is the original cover still on it, and has anyone underlined any passages. It turns out that the Tibetan concept of a book is quite a bit different from the West’s, which raises interesting questions about how to capture and express that data in ways that can be usefully mashed up.


But it was when we moved on to talking about our author schemas that Jeffrey listed one type of metadata that I would never, ever have thought to include in a schema: reincarnation. It is important for Tibetans to know that Author A is a reincarnation of Author B. And I can see why that would be a crucial bit of information.
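An author schema extended with that reincarnation slot might be sketched as follows. This is illustrative only; the field names are my assumptions, not TBRC's or the library's actual schema.

```python
# Hedged sketch of an author record with a reincarnation slot.
# All field names here are made up for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AuthorRecord:
    name: str
    dates: str = ""
    works: list = field(default_factory=list)
    # Link to the earlier author of whom this one is a reincarnation,
    # if any -- the slot a Western schema designer would never anticipate.
    reincarnation_of: Optional[str] = None

author_a = AuthorRecord(name="Author A", reincarnation_of="Author B")
```

In a real linked-data setting the slot would likely point at another author record's identifier rather than a bare name, so the relation could be traversed like any other.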


So, let this be a lesson: attempts to anticipate all metadata needs are destined to be surprised, sometimes delightfully.

The American Psychiatric Association has approved its new manual of diagnoses — Diagnostic and Statistical Manual of Mental Disorders — after five years of controversy [nytimes].

For example, it has removed Asperger’s as a diagnosis, lumping it in with autism, but it has split out hoarding from the more general category of obsessive-compulsive disorder. Lumping and splitting are the two most basic activities of cataloguers and indexers. There are theoretical and practical reasons for sometimes lumping things together and sometimes splitting them, but they also characterize personalities. Some of us are lumpers, and some of us are splitters. And all of us are a bit of each at various times.

The DSM runs into the problems faced by all attempts to classify a field. Attempts to come up with a single classification for a complex domain try to impose an impossible order:

First, there is rarely (ever?) universal agreement about how to divvy up a domain. There are genuine disagreements about which principles of organization ought to be used, and how they apply. Then there are the Lumper vs. the Splitter personalities.

Second, there are political and economic motivations for dividing up the world in particular ways.

Third, taxonomies are tools. There is no one right way to divide up the world, just as there is no one way to cut a piece of plywood and no one right thing to say about the world. It depends what you’re trying to do. DSM has conflicting purposes. For one thing, it affects treatment. For example, the NY Times article notes that the change in the classification of bipolar disease “could ‘medicalize’ frequent temper tantrums,” and during the many years in which the DSM classified homosexuality as a syndrome, therapists were encouraged to treat it as a disease. But that’s not all the DSM is for. It also guides insurance payments, and it affects research.

Given this, do we need the DSM? Maybe for insurance purposes. But not as a statement of where nature’s joints are. In fact, it’s not clear to me that we even need it as a single source to define terms for common reference. After all, biologists don’t agree about how to classify species, but that science seems to be doing just fine. The Encyclopedia of Life takes a really useful approach: each species gets a page, but the site provides multiple taxonomies so that biologists don’t have to agree on how to lump and split all the forms of life on the planet.

If we do need a single diagnostic taxonomy, DSM is making progress in its methodology. It has more publicly entered the fray of argument, it has tried to respond to current thinking, and it is now going to be updated continuously, rather than every five years. All to the good.

But the rest of its problems are intrinsic to its very existence. We may need it for some purposes, but it is never going to be fully right…because tools are useful, not true.

I’m at the Semantic Technology & Business conference in NYC. Matthew Degel, Senior Vice President and Chief Architect at Viacom Media Networks is talking about “Modeling Media and the Content Supply Chain Using Semantic Technologies.” [NOTE: Liveblogging. Getting things wrong. Mangling words. Missing points. Over- and under-emphasizing the wrong things. Not running a spellpchecker. You are warned!]

Matthew says that the problem is that we’re “drowning in data but starved for information.” There is a “thirst for asset-centric views.” And of course, Viacom needs to “more deeply integrate how property rights attach to assets.” And everything has to be natively local, all around the world.

Viacom has to model the content supply chain in a holistic way. So, how to structure the data? To answer, they need to know what the questions are. Data always has some structure. The question is how volatile those structures are. [I missed about five minutes; had to duck out.]

He shows an asset tree, “relating things that are different yet the same,” with SpongeBob as his example: TV series, characters, the talent, the movie, consumer products, etc. Stations are not allowed to air a commercial with the voice actor behind SpongeBob, Tom Kenny, during the showing of the SpongeBob show, so they need to intersect those datasets. Likewise, the video clip you see on your set-top box’s guide is separate from, but related to, the original. For doing all this, Viacom is relying on inferences: A prime time version of a Jersey Shore episode, which has had the bad language censored out of it, is a version of the full episode, which is part of the series, which has licensing contracts within various geographies, etc. From this Viacom can infer that the censored episode is shown in some geography under some licensing agreements, etc.
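The inference chain described above can be illustrated with plain (subject, predicate, object) triples. This is a toy sketch, not Viacom's actual model or a real triplestore; all names and predicates are invented for the example.

```python
# Toy illustration of inferring licensing from a chain of triples:
# censored episode -> versionOf -> full episode -> partOf -> series,
# and the series carries the licensing facts.
triples = {
    ("censored_episode", "versionOf", "full_episode"),
    ("full_episode", "partOf", "jersey_shore_series"),
    ("jersey_shore_series", "licensedIn", "Australia"),
}

def infer_licensed_regions(item, triples):
    """Walk versionOf/partOf links upward and collect licensedIn regions."""
    regions, visited, frontier = set(), set(), {item}
    while frontier:
        node = frontier.pop()
        if node in visited:
            continue  # guard against cycles in the graph
        visited.add(node)
        for s, p, o in triples:
            if s != node:
                continue
            if p in ("versionOf", "partOf"):
                frontier.add(o)  # climb toward the series
            elif p == "licensedIn":
                regions.add(o)   # inherit the licensing fact
    return regions
```

Calling `infer_licensed_regions("censored_episode", triples)` yields `{"Australia"}`: the censored version inherits the series' licensing even though no triple states it directly, which is the point of using an inference-capable store.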

“We’ve tried to take a realistic approach to this.” As excited as they are about the promise, “we haven’t dived in with a huge amount of resources.” They’re solving immediate problems. They began by making diagrams of all of the apps and technologies. It was a mess. So, they extracted and encoded into a triplestore all the info in the diagram. Then they overlaid the DR data. [I don't know what DR stands for. I'm guessing the D stands for Digital, and the R might be Resource.] Further mapping showed that some apps that they weren’t paying much attention to were actually critical to multiple systems. They did an ontology graph as a London Underground map. [By the way, Gombrich has a wonderful history and appreciation of those maps in Art and Representation, I believe.]

What’s worked? They’re focusing on where they’re going, not where they’ve been. This has let them “jettison a lot of intellectual baggage” so that they can model business processes “in a much cleaner and effective way.” Also, OWL has provided a rich modeling language for expressing their Enterprise Information Model.

What hasn’t worked?

  • “The toolsets really aren’t quite there yet.” He says that based on the conversations he’s had today, he doesn’t think anyone disagrees with him.

  • Also, the modeling tools presume you already know the technology and the approach. And the query tools presume a user at a keyboard, rather than a backend Web service capable of handling sufficient volume. He’d like, for example, a “Crystal Reports for SPARQL”: a genuinely usable tool.

  • Visualization tools are focused on interactive use. You pick a class and see the relationships, etc. But if you want to see a traditional ERD diagram, you can’t.

  • Also, the modeling tools present a “forward-bias.” E.g., there are tools for turning schemas into ontologies, but not for turning ontologies into a reference model for schema.

Matthew makes some predictions:

  • The toolsets will mature into robust tools

  • Semantic tech will enable queries such as “Show me all Madonna interviews where she sings, where the footage has not been previously shown, and where we have the license to distribute it on the Web in Australia in Dec.”

(Here’s a version of the text of a submission I just made to Boing Boing through their “Submitterator”)

Harvard University has today put into the public domain (CC0) full bibliographic information about virtually all the 12M works in its 73 libraries. This is (I believe) the largest and most comprehensive such contribution. The metadata, in the standard MARC21 format, is available for bulk download from Harvard. The University also provided the data to the Digital Public Library of America’s prototype platform for programmatic access via an API. The aim is to make rich data about this cultural heritage openly available to the Web ecosystem so that developers can innovate, and so that other sites can draw upon it.

This is part of Harvard’s new Open Metadata policy which is VERY COOL.

Speaking for myself (see disclosure), I think this is a big deal. Library metadata has been jammed up by licenses and fear. Not only does this make accessible a very high percentage of the most consulted library items, I hope it will help open the floodgates.

(Disclosures: 1. I work in the Harvard Library and have been a very minor player in this process. The credit goes to the Harvard Library’s leaders and the Office of Scholarly Communication, who made this happen. Also: Robin Wendler. (next day:) Also, John Palfrey who initiated this entire thing. 2. I am the interim head of the DPLA prototype platform development team. So, yeah, I’m conflicted out the wazoo on this. But my wazoo and all the rest of me is very very happy today.)

Finally, note that Harvard asks that you respect community norms, including attributing the source of the metadata as appropriate. This holds as well for the data that comes from the OCLC, which is a valuable part of this collection.

Google has announced that it is retiring Needlebase, a service it acquired with its ITA purchase. That’s too bad! Needlebase is a very cool tool. (It’s staying up until June 1 so you can download any work you’ve done there.)

Needlebase is a browser-based tool that creates a merged, cleaned, de-duped database from databases. Then you can create a variety of user-happy outputs. There are some examples here.
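The merge-and-dedupe step at the heart of a tool like Needlebase can be sketched in a few lines: combine records from several sources, normalize a matching key, and drop duplicates. This is a rough illustration under my own assumptions (the field names and the matching rule are invented), not how Needlebase actually worked.

```python
# Rough sketch of merging and deduplicating records from multiple
# databases. The key function and fields are assumptions for illustration.

def normalize(record):
    """Build a matching key; here, lowercased title plus year."""
    return (record["title"].strip().lower(), record["year"])

def merge_dedupe(*sources):
    seen = {}
    for source in sources:
        for record in source:
            key = normalize(record)
            # Keep the first record seen for each key; a real tool would
            # reconcile conflicting fields rather than discard them.
            seen.setdefault(key, record)
    return list(seen.values())

db_a = [{"title": "Moby-Dick", "year": 1851}]
db_b = [{"title": "  moby-dick ", "year": 1851},
        {"title": "Walden", "year": 1854}]

merged = merge_dedupe(db_a, db_b)
# The two Moby-Dick records collapse into one; two records remain.
```

The hard part in practice is the `normalize` function: fuzzy matching, conflicting field values, and records that only partially overlap are where tools like Needlebase earned their keep.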

Google says it’s evaluating whether Needlebase can be threaded into its other offerings.
