
Shirky’s myth of complexity

Clay Shirky has given us a surprising number of Internet myths. And by this I mean not falsehoods but the opposite: Broad, illuminating ways of making sense of what’s going on. For example, Clay’s post about the power law distribution of links in the blogosphere (based on research by Cameron Marlow) changed how we view authority, fame, and success in the Web ecosystem, and provided the structure within which Chris Anderson could point to the Long Tail. And Clay’s Ontology Is Overrated made clear that a change in how we categorize our world affects very real power relationships; that essay was highly influential, including on my own Everything Is Miscellaneous.

Clay’s new post — The Collapse of Complex Business Models — gives us a broad way of understanding why those who used to provide us with content will not be the ones who give us content in the future…and why they cannot fathom why not.

Cory Doctorow in support of copyright

In this edition of Radio Berkman, Cory Doctorow argues in favor of copyright … the part of copyright that protects the rights of readers to own (and not just license) books.

It being Cory, the discussion covers topics such as the way in which books are like dogs and his sentimental attachment to his digital collection.

OMG. I disagree with Umberto Eco!

It makes me very nervous to disagree with Umberto Eco because he is so fathomlessly smart. But I think in this case I do. Sort of.

There’s a fabulous interview with Eco in Spiegel (in English) about why he loves lists. He is characteristically pithy, provocative and wise. A crucial paragraph, from the beginning:

The list is the origin of culture. It’s part of the history of art and literature. What does culture want? To make infinity comprehensible. It also wants to create order — not always, but often. And how, as a human being, does one face infinity? How does one attempt to grasp the incomprehensible? Through lists, through catalogs, through collections in museums and through encyclopedias and dictionaries. There is an allure to enumerating how many women Don Giovanni slept with: It was 2,063, at least according to Mozart’s librettist, Lorenzo da Ponte. We also have completely practical lists — the shopping list, the will, the menu — that are also cultural achievements in their own right.

I read the first sentence and was provoked, as Eco intends. Lists are the origin of culture? Please say more! But Eco doesn’t really explain, in this interview, why lists — as opposed to other forms of collections and orderings — are so important. The urge to make order, yes, but not lists themselves.

A list is one particular way of creating order. Lists are sequential and one-dimensional: Wines listed by year, or by place, or by ranking, or by the chronology of when you first encountered them. (Lists can be hierarchical, but they’re only lists if they can be resolved back down to the one-dimensional.) Lists thus are one elemental way of ordering the world. And they have a peculiar fascination, which Eco expresses beautifully. But I think it’s wrong to say that they’re the origin of culture. I think it’d be more accurate and useful to say that culture originates with collecting: Pulling things around us because of their appeal (a word I’m purposefully leaving vague).
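The claim that hierarchical lists are still lists only if they can be resolved back down to one dimension can be made concrete with a few lines of code. This is purely my own illustrative sketch, not anything from the post:

```python
# A hierarchical "list" counts as a list in this sense only if it can be
# resolved back down to a single, one-dimensional sequence.
def flatten(items):
    """Recursively resolve a nested list into one flat, ordered sequence."""
    flat = []
    for item in items:
        if isinstance(item, list):
            flat.extend(flatten(item))  # descend into the sub-list
        else:
            flat.append(item)
    return flat

# A wine list grouped hierarchically by region, with sub-lists:
wines = ["Bordeaux", ["Margaux", "Pauillac"], "Tuscany", ["Chianti"]]
print(flatten(wines))  # ['Bordeaux', 'Margaux', 'Pauillac', 'Tuscany', 'Chianti']
```

The hierarchy adds grouping, but the order of the flattened result is fully determined, which is what keeps it a list rather than a multi-dimensional collection.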

I’m sure I’m making too much of Eco essentially drumming up interest in his exhibit at the Louvre, but the issue matters a little bit. I think (based on little to nothing) that lists emerged as a stripping down of multi-dimensional collections. Culture first happened (I imagine) when we pulled together pieces of the world that spoke to us in ways we could not articulate. We assembled them as spaces through which we could wander, or piles through which we could collectively sort (“Oooh, I particularly like that green shiny stone!”). Lists are an abstraction, and culture began (I suppose) with an unarticulated sense that some things go together — and perhaps our first conversations were about why.

Eco goes on to say many wonderful things about why we have liked lists, including proposing that listing properties of an object can liberate us from looking for the definitional essence of things. (For more on this, read his important book, Kant and the Platypus.) In fact, Eco suggests that a mother defines a tiger to her child “Probably by using a list of characteristics: The tiger is big, a cat, yellow, striped and strong.”

I have a bunch of issues with that.

First, that type of definition really just makes explicit what’s implicit in the traditional approach to definitions as essence. In the traditional Aristotelian approach, the essence is the creature’s spot in the hierarchy of beings. So, a tiger is a species of cat, and thus would be specified by its difference from other cats but also by all of the properties of the classes above it (mammal, vertebrate, animal, etc.). The essential definition and the list definition both consist of a list of properties, but the essential definition nests them so that they don’t all have to be spelled out, and so we can see which differences “count.” Eco says, “The essential definition is primitive compared with the list,” but it seems to me that a beautifully nested, hierarchical system of essential definitions is in fact more advanced — it requires abstraction and systems thinking — than a mere list.

But, I don’t want to miss Eco’s essential (so to speak) point here, which is that defining something with a list breaks us out of the notion that there is a single, knowable essence. Absolutely. There’s no eternal essence, “just” a set of properties that are relevant depending upon our circumstances. With that I wholeheartedly agree.

My second problem with this is that — as George Lakoff says in Women, Fire and Dangerous Things, explicating and expanding the work of Eleanor Rosch — the mother (heck, maybe even the father) probably actually teaches the child what a tiger is by pointing at one, or at a picture of one. We learn through prototypes, not through essential definitions, and not by making lists. List-making is an abstraction and a secondary activity.

Third, the sort of listing the parent does seems to me not to have the properties that make lists captivating to Eco. The parent isn’t trying to give a complete listing that brings a sense of mastery over the infinite and over death. She’s just pointing out some of the salient features. If it is a list, it’s not a list of the sort that Eco has charmed us about.

Fourth, while lists of properties are a useful corrective to thinking that things are exhausted by a definition of their essence, lists strip out so much that they don’t seem much more adequate than essential definitions. A tiger isn’t a list.

This is just a fun interview in Spiegel, so I may be taking it too seriously. But even if lists occur within culture — including the lists in literature he points to — rather than being the origin of culture, the interview does indeed help us see why our fascination with lists is a fascination with something bigger than lists.

Lego blocks unmiscellanized

Giles Turnbull at the Morning News reports on his research interrogating (gently) children from different families about what they call various Lego pieces. Quite interesting in its own taxonomic way, and a topic that’s amusing even just to contemplate.

How embarrassing

All the tagging and categorization info on this site seems to be gone. Poof! Those were the categories and tags that would help you browse the site by topic.

Very embarrassing for a site about the power of tagging and categorization. The lesson: Metadata needs to be backed up as much as content does.

I didn’t. That’s what’s embarrassing.

The FCC has put up a site — — where anyone (after registering with a valid email address) can post an idea, or vote existing ideas up or down. I love the idea of the feds opening discussions up, although I’m not convinced that this particular implementation achieves its presumed aims. But, what the heck! Try-fail-try is the right rhythm for the Net.

The site defaults to listing the ideas reverse chronologically, which adds some serendipity, or you can choose to view them listed in order of popularity, which encourages piling on. You can also browse by category/tag.

Anyone who registers can post a comment. The comments are unthreaded, discouraging much development of ideas but also discouraging flaming. You can report a comment as being “abusive,” but otherwise cannot rate them.

At the moment, the most popular posting is from Tim Karr, who, according to his biography at, a site sponsored by, “oversees all Free Press campaigns and online outreach efforts, including” Tim — who I know a bit and like — is an activist. He has the most popular post at the FCC’s site presumably because Free Press sent out a mailing urging supporters to vote it up.

There’s absolutely nothing wrong with that. It’s how politics is played in this country. If an anti-NN group sponsored by, say, AT&T wanted to play the same game, it’s perfectly entitled to. It’s not hard to imagine a well-funded group swamping Free Press’s shoestring efforts and getting orders of magnitude more people to thumbs-up an anti-NN comment.

Which is to say that an open discussion board like the one the FCC has posted can serve either of two purposes. It can be a place where people come for rational discussions across political positions, or it can serve as an informal poll of citizens’ sentiments about an issue. But combining the two means that neither works very well. It becomes simply an opportunity for gaming the system.

It seems to me that sites such as these cannot serve as a poll that has any value at all. Besides, we have lots of other ways of gauging public opinion, including scientific polling and elections. If, on the other hand, the FCC wants to sponsor a forum for useful discussion or to generate new ideas, it could modify the current implementation. For example — and these are just ideas that may turn out to be gigantic belly flops — comments could be divided into two tracks, pro and con, with most-popular listings for each. Readers could be allowed to vote up but not down. Comments could be threaded. The comments could be rated. Postings could have buttons for “agree/disagree” and “interesting,” so that the site could highlight articles that people disagree with but find interesting.
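To make those suggested mechanics concrete, here is a minimal sketch of a comment store with separate pro/con tracks, up-only voting, and an “interesting” signal tracked apart from agreement. All the names and structure here are my own invention, not the FCC’s:

```python
# Hypothetical sketch of the suggested forum mechanics: comments are divided
# into "pro" and "con" tracks, readers can only vote up (never down), and
# "interesting" is counted separately from agreement, so a comment people
# disagree with can still surface if it provokes thought.
class Forum:
    def __init__(self):
        self.comments = {}  # comment id -> record

    def post(self, comment_id, text, track):
        assert track in ("pro", "con"), "comments are divided into two tracks"
        self.comments[comment_id] = {
            "text": text, "track": track, "upvotes": 0, "interesting": 0,
        }

    def upvote(self, comment_id):
        # Voting up but not down: no way to bury the other side.
        self.comments[comment_id]["upvotes"] += 1

    def mark_interesting(self, comment_id):
        # A signal orthogonal to agreement.
        self.comments[comment_id]["interesting"] += 1

    def most_popular(self, track):
        # A separate most-popular listing for each side, so the popularity
        # contest is confined to determining the best arguments per side.
        in_track = [c for c in self.comments.values() if c["track"] == track]
        return sorted(in_track, key=lambda c: c["upvotes"], reverse=True)
```

Even this design can be gamed, of course; the point is only that confining rankings to each track removes the incentive to swamp the overall contest.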

All of these techniques could be gamed because everything can be gamed. Some discussion boards do work, though. I don’t know what the magic keys are, but I’m pretty confident that a political discussion board that includes an overall popularity contest will so encourage gaming that its results will necessarily be unreliable. At the very least, the popularity contest should be confined to determining the best arguments for each side.

But I don’t want to close on a negative note, for the FCC is to be congratulated on its efforts to open its processes up not only to lobbyists and geeks who know how to walk and talk like an FCC commenter, but to the general public. And it’s doing so in the proper Webby way of taking small steps and not being afraid to fail in public. That takes guts.

There are two new-ish Radio Berkman interviews up: Me talking with Viktor Mayer-Schönberger about his book that argues that we are in danger of forgetting how to forget, and Russell Neuman on learning from the past of the media.

Harry Lewis has a terrific post about a $300 do-it-yourself book scanner he saw at the D is for Digitize conference on the Google Book settlement. The plans are available at, from Daniel Reetz, the inventor.

There are lots of personal uses for home-digitized books, so — I am definitely not a lawyer — I assume it’s legal to scan in your own books. But doesn’t that just seem silly if your friend or classmate has gone to the trouble of scanning in a book that you already own? Shouldn’t there be a site where we can note which books we’ve scanned in? Then, if we can prove that we’ve bought a book, why shouldn’t we be able to scarf up a copy another legitimate book owner has scanned in, instead of wasting all the time and pixels scanning in our own copy?

Isn’t Amazon among the places that: (a) knows for sure that we’ve bought a book, (b) has the facility to let users upload material such as scans, and (c) could let users get an as-is scan from a DIY-er if there is one available for the books they just bought?

Net uncovers new type of cloud

There are reports of a new type of cloud, one that is not currently in the official International Cloud Atlas. Or, possibly, it is a formation that’s been around forever, but the scattered reports are only now coalescing thanks to the Net.

According to Amazon’s review of Richard Hamblyn’s The Invention of Clouds, we only began thinking clouds could be categorized in 1802, when Luke Howard started giving public lectures. The very idea that clouds — the paradigm of the uncatchable — could be divided into groups was (apparently) fascinating and thrilling. (Lamarck had also categorized clouds, but his scheme didn’t catch on.)

A quick googly scan makes it seem that the cloud taxonomy is pretty messy. For example, the University of Illinois’ “cloud types” page lists four broad categories, and a list of miscellaneous clouds, each of which is categorized under one of the four basic types, evoking a “Huh?” reaction from at least one of us. The cloud taxonomy page at Univ. Missouri-Columbia lists eight types. Do you categorize by what they look like, how high they are, what they do (rain or not?), which celebrity profiles they resemble …? Categorizing clouds is truly a Borgesian task.

And, dammit, wouldn’t you know? Here’s a poem by Jorge Luis Borges called: “Clouds (II)” (with the line-endings probably removed):

Placid mountains meander through the air, or tragic cordilleras cast a pall, overshadowing the day. They are what we call clouds. And their shapes are often strange and rare. Shakespeare observed one once. It seemed to be a dragon. That one cloud of an afternoon still kindles in his words and blazes down, so that we go on seeing it today. What are the clouds? An architecture of chance? Perhaps they are the necessary things from which God weaves his vast imaginings, threads of a web of infinite expanse. Maybe the cloud is emptiness returning, just like the man who watches it this morning.

(Translated by Richard Barnes. Robert Mezey and Richard Barnes, “Clouds (II)” [poem], The American Poetry Review, World Poetry, Inc., 1996. Via HighBeam Research, 11 Oct. 2009.)

More Borges poems

Viktor Mayer-Schönberger is giving a talk at the Berkman Center (well, actually at Pound Hall) on his book Delete: The Virtue of Forgetting in the Digital Age. Viktor teaches at the National University of Singapore, and was at the Kennedy School for ten years.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

He begins with the story of a person studying to become a teacher who was kicked out of school because the school noticed a photo of her drinking on Facebook. She tried deleting it, but the Internet remembered it. He gives another example: a person who noted in an article that he had taken LSD in the 1960s. When he tried to cross into the US, an immigration officer refused him admittance because he hadn’t offered up that information; the officer uncovered it by googling him. What’s put on the Web is never forgotten. In another example, the information was not put up by the individual but by someone else: a bar/club in Europe records all the people, all the drinks, etc., and has never deleted any of it. Likewise, Google knows more about us than we can remember.

For millennia, forgetting was easy, and remembering was hard, says Viktor. So, we’ve come up with ways to pass on our memories. The oral tradition. Painting. Writing. “But these tools have not altered the fundamental fact that for us humans, forgetting is easy, and remembering is time-consuming and expensive.” The book and the photo also haven’t altered this fact. What is long past fades in our mind. We depreciate what is no longer relevant. But because forgetting is biological, we never had to develop explicit strategies to forget. Now we’ve moved from biologically forgetting to permanent remembering. [Hmm. I haven’t. We still don’t remember much. But we have more records, and thus are able to retrieve more. That seems different to me.]

This has happened because storage is cheap in the digital world. Google has server farms with a capacity of 100,000 terabytes perhaps. And we’ve gotten much better at retrieving information. And we have global access. Remembering has become the default.

There are, of course, benefits to this, Viktor says. But undoing forgetting has deep consequences, far beyond the information efficiencies. He points to power and time.

Power: If others have info about us and can keep that info accessible for a very long time, the informational power increases, and can affect how we transact and interact. It’s Bentham’s Panopticon: behavioral compliance through the permanent threat of constant surveillance.

Time: Imagine Jane is about to catch up with her old friend John, but when reviewing their history of email, she discovers msgs from a time when he was nasty to her. She had forgotten that time. Now it comes back. Her relationship with John is now ruined. [Or, she discovers msgs that remind her she once loved him. Isn’t Viktor’s example actually an argument for more remembering, so she can see how she got over the bad time?] “In analog times, the dangers were limited” because our biology would have brought us to forget.

Viktor talks about AJ, a non-fictional woman who has difficulty forgetting. It is a weird and unhappy condition. [This is why the conflation of human remembering and the presence of a fairly complete digital record matters. The presence of digital info and the tools for retrieving it does not turn us into AJ.]

Without forgetting, we have trouble changing. We have trouble forgiving. We may turn into an unforgiving society. “This is the real danger of shifting the default from forgetting to remembering.” Worse, suppose we stop relying on our own memories and rely instead on the digital memories. “Does that give those who control digital memory the power over history?”

What to do? Perhaps give privacy rights to individuals. But there are weaknesses: It’s not politically feasible in the US. Europeans have those rights, but people have not used them.

Or perhaps we could create an information ecology, a regulatory construction of what can be remembered. E.g., it might require the deletion of info after a particular time. This does not require individuals to go to court for enforcement, and it protects against an unforeseen future as when the benign Dutch social services registry was repurposed by the Nazis to identify Jews. “It may be better to store less than more.” But, after 9/11, we’re seeing requirements for increasing data retention, Viktor notes.

So, maybe we need to augment these approaches. “Digital abstinence,” for example. Don’t put everything on Facebook. But abstinence isn’t all that reasonable, he says. By the end of 2007, two out of three young Americans had put their info online.

The opposite approach is “full contextualization.” E.g., Jane can’t find the context of her bad treatment by John. Full contextualization would restore that. But will that ever be technically feasible? And if it were, would it really address the challenge of digital remembering? Do we have time to relive our past again and again?

Another approach: Hope for a cognitive adjustment. That is, over time we’ll learn to devalue older info and learn to live with an omnipresent past. “That would solve our problem. But is it likely?” How long would it take us to change how we assess information? “Cognitive psychologists are very critical of our ability to change our decision making in the short run.” [But a change in norms can happen much faster than that, and we govern what we’re allowed to notice and remember through norms. Statements like “That’s water under the bridge” and “Youthful indiscretions” are expressions of norms that enforce social forgetting without requiring actual brain evolution.]

Or, we could change our technology, rather than changing ourselves. E.g., a global DRM system to protect privacy. Viktor is not recommending this: “Wouldn’t this be a perfect surveillance system?” And we’d have to make sure that privacy is built deep into the infrastructure.

None of these six solutions is sufficient, although each offers something.

“I advocate a revival of forgetting…to establish a mechanism that makes forgetting easy, and makes remembering just a bit more strenuous.” Just enough to shift the incentives back to what we humans are used to. Viktor suggests an expiry date for information. Whenever we save info, we should be prompted to put in a date when we want it deleted. We should be able to change those dates.

The core of this proposal isn’t the automatic deletion, he says. Rather, the prompting for the date will remind us humans that most information is not of permanent value.

E.g., search engines could offer us an easy way to say how long we should remember searches. Or people could carry a device on their keyring to set expiration dates, perhaps tagging the expiration dates for the images of the people in digital photos.

Any expiry date system must have only two characteristics. First, it must aim at changing the default from remembering back to forgetting. Second, it must remind us of information’s temporal nature.
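As a thought experiment only — Viktor’s proposal is a policy design, not a file format, and everything below is my own invented illustration — the two required characteristics might look something like this in code:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the expiry-date idea: every saved item carries an
# expiry date, and the prompt for that date is itself the point, since it
# reminds us that most information is not of permanent value.
DEFAULT_LIFETIME = timedelta(days=365)  # an assumed, changeable default

class Store:
    def __init__(self):
        self.items = {}

    def save(self, key, value, expires=None):
        # Prompt-with-a-default: saving without thinking still yields an
        # expiry, shifting the default from remembering back to forgetting.
        if expires is None:
            expires = datetime.now() + DEFAULT_LIFETIME
        self.items[key] = {"value": value, "expires": expires}

    def extend(self, key, new_expiry):
        # Expiry dates should be changeable, as Viktor suggests.
        self.items[key]["expires"] = new_expiry

    def get(self, key):
        item = self.items.get(key)
        if item is None or item["expires"] <= datetime.now():
            self.items.pop(key, None)  # forgetting is the default
            return None
        return item["value"]
```

Note that nothing here is a privacy mechanism in itself; like the proposal, it only changes which behavior takes effort.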

Expiry dates are also no silver bullet, and don’t solve digital privacy problems, Viktor says. But they could be useful when used with some of the other proposed solutions.

“Forgetting is often forgotten…Let us remember to forget.”

Q: You don’t mention the propensity of all media to fade over time. Digital memory is not perfect. Also, data is growing so quickly that it gets too expensive to digitally remember everything. The amount of data is growing faster than Moore’s Law.
A: You don’t need much space to remember a billion queries a day. A couple of hundred dollars’ worth of data storage. And Google’s way of saving data is relatively future-proof.

Q: [me] If we take memory to mean only the human capacity, and digital “memory” to be more like what we usually call storage, then what has actually happened to human memory in the digital age?
A: I chose the term “digital memory” carefully. If I can’t access my VCR tapes easily, they’re pretty much useless to me. Digital stuff is so easily accessible. How has digital remembering changed human remembering? I don’t know. But my argument isn’t that it’s changed human remembering, but that it has changed the external stimuli affecting our memory.

Q: One of the way a culture forgets is that it lets books go out of print, get moved out of libraries, etc. Now we have Google Books, which will make all books ever printed available (pretty much). Do you see negative effects of this project?
A: I haven’t given it enough thought because authors would like to set their books’ expiry dates very far in the future. Some preliminary research we’re doing on court decisions is showing an interesting effect on memory.
Q: The author of the book isn’t the only one concerned with the info in it. There may be people written about who would want a say…
A: Yes, and the author’s rights aren’t always fully owned by them.

Q: Digital memory has value as cultural memory. The things we’d put expiration dates on have value even if against the interests of the people at the time, because it has social and historic meaning…
A: That’s just conjecture…
Q: No it’s not. We’re leaving traces now all the time. How we put that info to use is a different question.
A: Suppose you’re an author. Shouldn’t you be able to put bad early stories into the trash bin? Why should society have the right to take it from you and preserve it and make it public?
Q: Great point, but we still do struggle with this. Nonetheless, I would recommend we give thought to how these things might sensibly be balanced. E.g., the Iran election twitter stream. Enormous amt of fascinating info has been lost.
A: The solution is built in. For certain contexts, we may be required to mandate a very long expiry date. We do that all the time. I’m arguing for keeping that as the exception to the rule.

Q: I’m a cultural historian, trained as a Medievalist. There’s data scarcity in that field. Who decides about inclusion, preservation, etc.? Institutions have performed the filtering role. Google keeps some types of info and not others. Others are interested in your social security number, etc. So, who are the gatekeepers? There’s power to the Internet Archive’s approach of capturing everything. The stuff that the institutions of memory don’t preserve may turn out to be the most interesting for historians. (I basically buy your core argument, although I’m a believer in the cognitive adjustment.)
A: Brewster Kahle (of the Internet Archive) and I are in general agreement. The Archive sets expiry dates. [Not sure I got that right. Sorry.] My core argument is to give back the choice to the individuals.

Q: I too believe in the cognitive adjustment because I see myself and others already doing that. Sure, you find old emails reminding of something you wanted to forget, but when you accidentally delete some years’ worth, you feel an intense sense of loss.
A: When I lost all my email at the end of 1998, I was completely horrified. But then I discovered it doesn’t really matter. I started out believing the cog adjustment argument, but after I read cog science books, I changed my mind. I want to plug The Seven Sins of Memory, which shows how hard it is to readjust.

Q: Suppose two of us in a shared record have different expiry preferences…
A: I talk about that a lot in the book.

Q: There’s a big diff between what I want to preserve and what others do. The European privacy laws require data deletion. Google and others are now negotiating with the European Commission about this …
A: We need to differentiate between privacy rights and norms.

[missed a couple of questions. sorry.]

Viktor says that he recognizes that expiry dates are a crude instrument. Too binary. “I’d prefer rusting or something like that.” :)
