A Golden Age of Television?

ILLUSTRATION: Hippo Watching “The Mountain and the Viper” • watercolor on paper • 5×7″

We love living during Golden Ages. Try the ubiquitous Google autocomplete experiment, and type in “Golden Age of.” Then pass through the alphabet from “Athens” to “Zeppelins” to get a sense for just how many Golden Ages you’ve already lived through.[1] Thanks to the Golden Age of Hyperbole, we might even live in the Golden Age of Golden Ages.

Language being what it is, we also sometimes find ourselves in New Golden Ages—when we have been too quick to judge an earlier era’s merits—and Silver Ages—when we update the costumes of famous comic book characters—and Gilded Ages—after the humorous and biting indictment of US society, for once not incorrectly attributed to Mark Twain. Heck, we’ve had three New Gilded Ages in the last twenty years for those of us doomed to repeat history and bubbles and sit in traffic on the 101. But note: if you do find yourself in a Bronze Age or a Stone Age for a particular subject, it is incumbent on you to be nice and refer to it as “early innings.”

And so it’s been said that we live in a New Golden Age of Television (see here or here), but I think it’s misleading to treat this as merely a phenomenon of creativity, the logical necessity of talent and artistry progressing such that (legitimately wonderful) shows such as The Wire or Breaking Bad leap from the heads of their showrunning creators. Of course Steven Johnson is right when he talks about an evolution of TV complexity in Everything Bad Is Good for You.[2] But befitting a Golden Age, you should do as Lester Freamon intones: follow the money. “You don’t know where the **** it’s going to take you.” I think television content got better because a lot more money flowed into its creation, and that money got spent in more interesting ways.

Yes, over the course of the aughts, content quality fueled popularity, which in turn fueled market growth and reinvestment. But I’d argue that this era of television benefited more from a short-term confluence of revenue growth across competing formats and competing windows, all while a fixed (or declining) amount of media time spent was fragmented across these myriad products. It’s the sustainability of these trends that we should worry about when we think of the art of scripted entertainment.


If you love TV, the sustainability of a particular kind of investment is worth discussing because even though artistic craft and progress are difficult for a society or an institution to unlearn, the availability and distribution of investment can influence whether they flourish. Overinvestment in one area can destroy a once vibrant art scene (the Our Band Could Be Your Life-era underground → late-90s grunge). A change in investment style can squander an artistic generation (the aughts’ theatrical investment in “franchises” over spec scripts). And collapse can occasion the disappearance of knowledge or vibrancy: we lost the recipe for Roman concrete and Pantheon-like domes for ages after Rome’s fall, and the Renaissance sputtered to an end as the Medici family and the Pope lost their spheres of influence. What, then, are our sources of TV patronage today?

From 2002 to 2013, US broadcast, local, and cable television advertising revenue grew from ~$58 billion to ~$74 billion. Some of that revenue flows back to content producers directly; some of it flows through local affiliate fees and syndication fees. During that same period, cable subscription revenue grew from ~$48 billion to ~$105 billion. Revenue share flows back to broadcast and cable content producers through retransmission and carriage fees, respectively. The DVD market peaked in 2004 at $25 billion, pre-Netflix, with TV box sets driving a large percentage of that. Speaking of the streaming giant, Netflix grew from zero to $5 billion in subscription revenue over this time (against an increasing amount of binge-friendly TV viewership), while YouTube grew similarly (but with only 10% going to traditional media after revenue share). You can throw in Hulu and other premium digital video advertising, Amazon Instant Video, iTunes, Google Play, Xbox Video, and others’ pay-per-view and subscription revenues, too, some of which are subsidized by other, more profitable businesses.

That’s a cornucopia of overlapping revenue sources powering the creation of scripted entertainment and the acquisition / production of live sports: advertising, subscription, streaming, digital and physical retail, and other digital platforms. Investment in new platforms like streaming came while old platforms like syndication were still paying dividends. And with this new level of investment came different kinds of incentives.

For many channels, advertising no longer represented the majority of content revenue. Without having to satisfy advertisers, content creators could take on more risk with their stories. Streaming created additional library value for highly serialized stories (think 24) over the episodic ones (think Law & Order) that typically ruled in syndication negotiations. The cable bundle became larger and more expensive, fueling a fight for differentiation via content among a lot more players—HBO, Showtime, AMC, FX, USA, BET, Starz, ABC Family, etc.

As a result, we’ve had more pilots produced and picked up across a range of channels and products, more web series created, and an increase in budgets / cost per minute for the biggest sports and scripted spectacles from NFL games to Game of Thrones. Showrunners and talent have been given unprecedented control over the television product, mirroring the pattern of 1970s Hollywood film documented by Peter Biskind in Easy Riders, Raging Bulls. We’ve had greater diversity of product with higher production value all taking more risks than ever before in the pursuit of creative differentiation. If there were a creative equivalent of fracking, this would be it.

But what if all of that changed?


The human demand for narrative and fiction is limitless. However, over the next several years, each and every one of television’s revenue sources will transition, for good or for bad. These changes may or may not be kind to the talent or the fans of our New Golden Age of Television.

  1. Television advertising revenue may decline as time spent moves to digital platforms; at the same time, digital video advertising growth may not keep pace or may shift to non-traditional content types.
  • US broadcast and non-MSO cable advertising is broadly shaped by brand advertising dollars spent upfront chasing elusive 18-to-24-year-olds and moms in order to establish long-term spending patterns. 18-to-24-year-olds watch less than half as much TV as demographics above 55, so TV advertising has grown by increasing its rate card ($27 to $47 for primetime 30-second spots) against declining audiences in these key demos and by investing heavily in the few live events that can still maintain broad viewership. Meanwhile, YouTube delivers video advertising priced to be attractive regardless of content quality, and Facebook and others deliver video advertising to audiences without even worrying about needing pesky ol’ video content. See the YouTube report I wrote with the Jefferies & Co. Internet team here for more on this…
  2. Local affiliate and syndication revenue will very likely decline, and streaming revenue may not grow fast enough to replace it.
  • US local and MSO cable advertising is bought at spot prices by hybrid brand/performance advertisers. Think car dealers, pharmaceuticals, old-people scams, and, during certain periods like now, politicians. This revenue comes back to content production in the form of local affiliate fees and syndication deals, but streaming, time shifting, and digital video all eat into the decidedly non-appointment viewing of Simpsons and Seinfeld reruns slapdashed into increasingly irrelevant 24-hour programming blocks. As supply shrinks, those advertisers may seek alternative, cheaper, more measurable buys… like YouTube.
  3. Cable subscription revenue to content producers (in the form of retrans and carriage fees) is at risk from plateauing subscriber growth, aforementioned youth TV trends, and alternative lower-priced streaming bundles.
  • It doesn’t seem like MSOs are in any danger of losing revenue from cord cutting, but Hollywood sure is. To the extent that MSOs ever have to focus on data uber alles, that focus will certainly readjust the retrans and carriage fee arrangement. And even assuming flat subscriber growth, how much more price elasticity is there before demand plummets? What happens if cable is unbundled? How does consumer choice in a la carte pricing influence what sort of content gets funded compared to now?
  4. A la carte streaming services such as Netflix and Amazon are obviously growing subscribers, but consumers are spending much less on them, even across multiple services, than on cable.
  • One can subscribe to Netflix and Amazon Prime and buy full seasons of 25 first-run shows in HD via iTunes for the cost of a year of a $100/month cable subscription (see the back-of-the-envelope sketch after this list). If it’s your predilection, you can replace those 25 first-run shows with 83 sporting events accompanied by beer and wings at Chili’s. So in this world, will cable revenue be cannibalized? What happens to content quality if more people are paying closer to $7.99 per month than $100+ per month for a cable bundle? Moreover, if Netflix or Amazon is the preeminent gatekeeper, what sort of content will be produced? The episode breaks of House of Cards remind me of an old Republic Pictures serial. I half expected, as in Chapter 3 of The Crimson Ghost, that [spoiler alert] Kate Mara would be shown saving herself from the speeding train in the next episode, after dying in the previous one.
  5. Finally, I think we all know that your used TV DVDs are on sale at Amoeba Records, and you’re waiting for the next season of your favorite show to come on Netflix instead of buying it on iTunes.
  • The now haggard DVD market supported a lot of interesting, diverse content, especially one-season wonders (like Wonderfalls), which compelled content producers to take higher-risk bets on those creators in the future (leading, for example, to Pushing Daisies). What happens to artist revenue when we choose to subscribe rather than support the work directly? If video follows music, the money flowing to content production will decline.
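
To put rough numbers on the a la carte comparison above, here’s a minimal back-of-the-envelope sketch in Python. The Netflix price, the cable price, and the count of 25 shows come from this post; the Amazon Prime and iTunes season prices are my own circa-2014 assumptions, so treat the totals as illustrative rather than definitive.

    # Back-of-the-envelope: one year of an a la carte stack vs. a cable bundle.
    # Netflix, cable, and the 25-show count come from the post; the Prime and
    # iTunes season figures are assumptions (circa 2014), not quoted prices.

    NETFLIX_MONTHLY = 7.99     # Netflix streaming, per month (from the post)
    PRIME_ANNUAL = 99.00       # Amazon Prime, per year (assumed)
    ITUNES_HD_SEASON = 40.00   # one full HD season of a first-run show (assumed)
    CABLE_MONTHLY = 100.00     # typical cable bundle, per month (from the post)
    FIRST_RUN_SHOWS = 25

    a_la_carte = (NETFLIX_MONTHLY * 12
                  + PRIME_ANNUAL
                  + ITUNES_HD_SEASON * FIRST_RUN_SHOWS)
    cable = CABLE_MONTHLY * 12

    print(f"A la carte stack: ${a_la_carte:,.2f} per year")  # ~$1,194.88
    print(f"Cable bundle:     ${cable:,.2f} per year")       # $1,200.00

Under those assumptions the two stacks come out nearly even, which is the point: the same household spend can flow to very different sets of content producers.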

Look, I’m generally bullish on the outlook for scripted content, both as a popular form of entertainment and as an art form. I would love to make something someday. And it could be that the revenue sources in the list above are unevenly distributed now yet reflect a real and consistent economic demand that will simply be redistributed as trends wax and wane. But three things keep occurring to me…

First, we won’t be able to pay $7.99 per month for Netflix, scrape by with Amazon Prime and YouTube, and expect the same quality of content we experienced during this New Golden Age.

Second, we should be careful what we wish for when we ask to unbundle things. The economics may be distributed more “fairly,” but we may also end up unhappily watching “Where Are My Pants?” or “Ow, My Balls.” (Or maybe writing on our blogs to no one. I’m very interested in bundling, unbundling, and rebundling as a broader economic trend in our society, and I plan to write more on this soon.)

Finally, nothing lasts forever—no episode, no season, no series, no Golden Age. To quote the Hagakure once again: “It is said that what is called ‘the spirit of an age’ is something to which one cannot return. That this spirit gradually dissipates is due to the world’s coming to an end. For this reason, although one would like to change today’s world back to the spirit of one hundred years or more ago, it cannot be done. Thus it is important to make the best out of every generation.”

On that note, The Wire ended more than six years ago. It’s probably time for me to let it go. We can all start talking about the Golden Age of Netflix or YouTube now. Oh wait.

[1] Speaking of Athens and Zeppelin, I’ve heard the Golden Age of Athens looked a lot like the cover of Houses of the Holy. [insert rimshot]

[2]  I’ll tell the story of Steven Johnson explaining his thesis to a roomful of incredulous, fancy art patrons in the Berkshires another time.

Remembering Nepal: On Fear

PHOTO: On the Way to Yak Kharka

It’s terribly sad to read about the death of a number of guides and trekkers in the Annapurna region of Nepal. For those who have missed some of the coverage (see here and here), a sudden blizzard during the region’s busiest travel season blanketed its highest non-technical trails—Thorong La on the main route and Tilicho Lake, a popular yet difficult side trek to the south, usually undertaken before crossing Thorong La. A survivor in one New York Times video describes embarking through the snow as the blizzard abated with the feeling that this moment was it, that she was going to die.

The survivor’s tangible fear and thanksgiving reminded me of the feelings I had in April 2013 when trekking with my friend Rich near Tilicho Lake during a similar but far less unruly storm. Even if one is warned about the dangers of trekking in Nepal, these sorts of experiences are not what one expects when passing young and old alike along these popular trails. I can only compare that kind of fear, and the suppression of it, to stories of soldiers who prepare for battle by accepting that they are already dead. Or to the Hagakure:

“Meditation on inevitable death should be performed daily. Every day, when one’s body and mind are at peace, one should meditate upon being ripped apart by arrows, rifles, spears, and swords, being carried away by surging waves, being thrown into the midst of a great fire, being struck by lightning, being shaken to death by a great earthquake, falling from thousand-foot cliffs, dying of disease or committing seppuku at the death of one’s master. And every day, without fail, one should consider himself as dead.

“There is a saying of the elders’ that goes, ‘Step from under the eaves and you’re a dead man. Leave the gate and the enemy is waiting.’ This is not a matter of being careful. It is to consider oneself as dead beforehand.”

Rich published a harrowing and amazing account of the storm that stranded us above 13,000 feet with more than a foot of snow and a trail wiped away—a trail already stupidly difficult in good conditions, with a 1.5-mile landslide area. (A video of this trail on YouTube without snow is a good shorthand for the experience.) There’s no reason for me to rehash the whole story. However, one part of the tale stays with me far more than each narrow step in the snow: the debate and decision to leave the teahouse where we were staying—especially when I hear the story of this survivor, for whom sheltering in place meant life and moving forward would likely have meant death.

You might think that in extreme situations decisions are always made quickly, with no time to spare and no time for regret. I think of film, with its compressed view of time: Indiana Jones realizes that the penitent man kneels before God moments before a blade would strike his head. Fiction doesn’t prepare you for the extreme situations that play out over a far longer “present.” The hours that morning before we left the teahouse were spent debating what I call the most vital of questions: “Should we stay or should we go?”


In a state of insomnia caused by altitude and cold, I held the need to pee for as long as I possibly could. I wondered whether the smell on my blankets wasn’t yak urine after all, as I steeled myself to go to the outhouse. It was probably before 5, and I stepped my flip-flop into a foot of snow, still falling around me. Without glasses on or contacts in, I saw the snow blur more than ever, giant flakes glancing through the light of my headlamp. No one else was awake, and I wasn’t sure what to do. Again, trained by temporal emergencies in both real life and fiction, my first thought was, “Shit. We are going to be snowed in. Do I wake everyone else in the lodge up?” Otherwise known practically as “When in danger / When in doubt / Run in circles / Scream and shout!” At the very least I could wake up Rich, to whom I remember saying upon my return, “Hey Rich, I think we kind of have a problem. Look outside.” He had been holding it as well.

We dressed and packed to prepare for the eventuality of leaving—again, more slowly than you would think, deliberately, and with the hope that others would begin to stir. Your mind races in two different directions and constructs narratives: first, the story of how the group stayed together and survived; second, the story of how you beat the odds by taking the landslide path less traveled. You assess whether the situation is getting more or less difficult in order to delay that decision… the snow covers the path already, the snow collects more slowly, the lodge owners will have experience with this sort of thing, and so on, all the while reflecting on whether delay is its own type of wrong decision.

As others woke up, this internal debate became external. The young man with the most trekking and climbing experience gave voice to our inner debate: right now you are safe. You are inside. You are, well, kind of warm. As soon as you move, you open yourself up to… death, injury, suffering. Like all of us inside, he wasn’t arguing for staying, but he would at least play devil’s advocate. A more urgent voice came from the one Nepali guide among us, an easterner far from the trails that Sherpas usually guide; he said only, “Get down now,” as he ate breakfast with his two clients, both women, and prepared for the trek down. “Now” apparently still meant deliberate preparation: breakfast, packing, and time for regret. My regular day job often had a greater sense of urgency.

Upon pulling a few more words out of the Sherpa, we learned that the danger now was getting down at all in any reasonable period—days, even weeks. More snow meant more chance of avalanche along the landslide area. A cessation of the storm at this time of year meant quick melting, again causing avalanches. And once melted, the dangerous landslide area would be that much more unstable. We had a window of time—small in the big scheme of things, infinite at this moment, eating a fried egg.

Once on the trail, we would learn how right he was—how the snow congealed the pebbles into a walkable mass, how the snow gave some confidence in one’s ability to arrest a fall in a way the sheer rock face never did, how we might only break bones if the worst came to pass.

Once on the trail, staggering along the landslide area five or ten minutes behind the Sherpa, we only had to step into the footprints in front of us, slowly, deliberately, and constantly. To quote another movie, “Movement is life.”

Once on the trail, we said little except to organize ourselves, and I thought of nothing but the present. The internal and external debate was over, and any fear turned into action, like alchemy. You know what to do, to borrow the excellent advice of Rich: Just don’t die.

Sadly, what was the right decision for us then may not have been the right decision for the guides and trekkers crossing Thorong La last week. I pray for them and for the Nepalis, whose guests we are and who take their duty as hosts, the virtue of Xenia, seriously. But when I think of fear, I still won’t think of making that hike down. I will think of that time of debate and deliberation before moving forward, and then of the Hagakure again:

When one has made a decision to kill a person, even if it will be very difficult to succeed by advancing straight ahead, it will not do to think about doing it in a long, roundabout way. One’s heart may slacken, he may miss his chance, and by and large there will be no success. The Way of the Samurai is one of immediacy, and it is best to dash in headlong.

Plants vs. Zombies

Despite working in digital media forever, and most recently in mobile video, I didn’t buy an iPhone until this year. I stuck with the most embarrassing of all phones—the BlackBerry—long after I could justify the merit of the one feature I loved: a physical keyboard. Unless I met with a particular kind of invective, I was generally not wracked with shame at my supposed Luddism whenever I placed my phone on the table for a meeting. At least it broke the ice. Far more embarrassing was pulling out my BlackBerry at a show or a party to take a photo. In San Francisco, it was akin to leprosy. So much for radical inclusion.

For me, the physical keyboard was an effective tool for writing quickly in less than ideal circumstances—under the table, in my pocket, while driving, after a night of heavy drinking (why does this sound kinkier than it is?). And anyway, doesn’t a real man hail a cab, call his dates, and put his phone away at dinner? In ignoring the iPhone, was I a dinosaur? Was I no different from the aging writer in 1990 who clung to his typewriter as a terribly heavy and ineffective life preserver while the good ship WordPerfect motored on past? We aren’t all going to be Tom Robbins with his lovely Remington SL-3.

My iPhone is better, but I miss having the keyboard for what the keys represent: production of my own ideas, not consumption of others’. Freedom, not servitude. The apps lined up on my iPhone remind me of the hospital scene in Mike Judge’s Idiocracy, modeled on McDonald’s cash registers, where a nurse can push one key for a broken arm and another for a severed head. Virtual keyboards suggest words that I don’t intend to use, and difficult though it may be to believe, I do think those words are dumber than the ones I planned to write myself. And all these app stores are walled gardens, yet still filled with weeds, and not the wonderful kind that fight back against zombies. (My favorite is the Starfruit! Cute and vicious.)

The relationship between society and technology runs one way and then back again, as James Boyle discussed in Shamans, Software, and Spleens. We create technology in an implicitly codified world, as Larry Lessig wrote in Code, but the influence goes back and forth: as we use technology, we reshape our perspective, explicitly and implicitly. Isn’t it best, though, to limit the number of things that change our perspective without our knowing? There’s a good chance that we’ll opt with our reptilian brains for that which is easy or wrong, because it’s easy, or because it’s popular. What happens when the one tool in our hand all the time is great for reading but not so great for writing? What happens when our wide webby world is guarded by an Apple gatekeeper? To hearken back to another post, thoughtlessness can lead to dehumanization. Lack of attention can lead to less-moral choices.

Okay, yeah, I’ve gone way too far again in my polemic. Heck, I used Uber myself for the first time today, and the virtual keyboards are getting better. Nevertheless, here’s a vote in favor of more wild-growing, brambly plants, like the blackberry, and fewer zombies!

P.S. At some point, I’d like to discuss one excellent book that deals with this subject, The Future of the Internet by Jonathan Zittrain, while also contrasting it with two that I feel miss the mark: You Are Not a Gadget by Jaron Lanier (who seems to have good philosophical intentions) and To Save Everything, Click Here by Evgeny Morozov (who strikes one as reasonable for thirty pages before he decides to settle every academic grudge he’s ever had for the next three hundred).

P.P.S. Upon some googling, it seems like Morozov has gotten himself into some hot water recently and will have additional academic grudges to settle. Eep.

James Murphy and Craft

ILLUSTRATION: Twerking Hippo • ink on paper + photoshop • 5×7″

Some people read the biographies of great men and women for inspiration, but when I’ve read those same stories, I’ve rarely said to myself, “This is the person I want to be. This person’s life is inspirational.” Humanitarians? Scientists? Do-gooders? I’m self-aware enough to see I’m more similar to Joel McCrea in Sullivan’s Travels—there are many ways to change the world. Successful businesspeople? The Man currently occasions self-loathing and fear of corporate sociopathy. Politicians? Next. Academics? Sorry. (At the bottom of the heap might be Founders. After many years in the trenches, I’m careful not to praise anyone for their irrational ideas until I understand their “ethics”—moral and work. More on this another time.)

Perhaps celebrity worship isn’t conducive to a well-rounded life; you’re following a poorly marked path toward a false idol, without the same lucky breaks (and with some Old Testament judgment waiting). I’m more inclined to seek guidance and solace in the wisdom of fiction, or to visit my fascination upon the virtues of people I know in the here and now. That I might be fascinated by the virtue of someone who reads the biographies of great men and women starts the whole damn cycle over again.

Sometimes I feel like even following a recipe or plan is a limitation on my own agency and development. I’m going to make chocolate chip cookies, and I’m going to make them my way! Don’t tell me what to do, Mr. Cookie Recipe Guy. Cue some not-so-good cookies plus a host of other life mistakes, a smaller bank account than I should have, and crippling self-doubt. There is apparent value to staying on the path and reading Lifehacker, my friends.

Yet the more I learn about the musician James Murphy, former member of LCD Soundsystem, the more I feel that he merits some consideration as a source of inspiration. I’m not alone. “For a small group of people—mostly young, mostly men—LCD was the naughts,” Nick Sylvester, a music critic, told The Atlantic. “The songs described very specific emotional realities about becoming an adult and attempting to be a decent human being.” Perhaps that’s why his influence infiltrated a wedding speech I gave as an officiant, followed by my friends choosing “All My Friends” as their reception entrance song. “That’s how it starts,” after all.

Yes, the LCD oeuvre is beautiful and chock full of the angsty wisdom one needs as a young man, but it’s James Murphy’s approach to his artwork, literally his artistic labor, that I find to be inspirational: working in the music trenches and playing actual instruments (!) long before a big break, developing an encyclopedic knowledge of the rare curiosities of popular music, sitting through new records by bands he used to like just to learn something, agreeing to random art projects (or coffee roasting) because they’re interesting or simply so they will exist, and, yes, creating the mother of all drops in “Dance Yrself Clean.”

That sort of dedication to craft, not luck, inspires me to stay dedicated to whatever work is most important and interesting in my life, whether it’s this blog, my art, or even a spreadsheet. As Sylvester writes in the liner notes to “Shut Up and Play the Hits,” with LCD James was “negotiating a way to live, love, and make art honestly. Artists aren’t the only ones looking for a way around the minefield of cliché.” The act of craft can be its own reward. Of course, Sylvester also writes that perhaps LCD had to end because “we should all know better than to get our answers from rock stars.”

Wait, does that mean I have to start my search for inspiration again?

Tonight (10/16/14), James Murphy DJs at Public Works in San Francisco. His set in November 2012 was the single best DJ set I’ve ever experienced live. Hope to see you there, finding your own answers.

In a non-government context, privacy in the US is governed generally by narrowly defined common law torts plus a handful of very specific privacy regulations (COPPA, HIPAA, and the VPPA, for example). Torts are utilitarian constructions—protecting property rights, the public good, etc. As such, privacy torts are characterized more by the prevention of economic harm through illegal or inappropriate actions (spying, blackmail, etc.) than by the pure protection of personal information. If you’ve ever been angry when technology CEOs claim privacy is dead (and get over it!), it might be better to refocus one’s anger on the fact that privacy in a market context was never alive to begin with.

Let’s talk first about what’s protected by torts and why. William Prosser is helpful here. He argues that the legal right to privacy, as advocated by Warren and Brandeis, could be better categorized as a set of lesser torts protecting lesser values. By combining these torts into an overbroad definition of privacy, Warren and Brandeis created something potentially destructive of the public good. Instead, Prosser classifies privacy torts into four categories, distinct from any abstract right to privacy: intrusion into private affairs, public disclosure of embarrassing private facts, placing the plaintiff in a false light, and appropriation of someone’s likeness.[1] These torts require that the individual be in a state of self-enforced seclusion, a state given up in almost every public interaction, from going to the grocery store to shopping online.

In other words, privacy torts best protect individuals who have a monetary interest in controlling information (say, a celebrity) and who keep to themselves (say, on a private island). By regulating only the economic loss from information disclosure or the mental anguish caused by an act of surveillance, as opposed to the act of information collection itself, privacy torts provide a kind of absolution for non-invasive surveillance. And in a strange reversal of Constitutional Law, you seem to have a greater right to privacy as a public individual whose information is valuable than as a private individual whose information is marginally interesting. Hooray for the 1%.

From Prosser’s almost reductionist argument for privacy, it’s easy to make a utilitarian or economic case against data protection. Richard Posner argues that a right to privacy allocates property rights in an economically deficient manner.[2] Personal information is most valuable in aggregate to a direct marketer and valueless to any given individual.[3] With overbroad data protection, the market would suffer because of poorly allocated property rights, given the substantial bargaining costs with each individual.[4] In addition, individuals already show a “general reticence” about disclosing discrediting information, information that might be valuable to businesses or the public.[5] (Are you cool with wearing that FICO credit score and STD test lanyard to work? No? Why are you interfering with the efficient allocation of property rights, man?) All joking aside, one can see how this line of thought leads to little standard protection of personal information in a non-government context. It also illustrates very simply why there’s a) no initial limit on Internet data collection and b) little to no collective action or market incentive to prevent it. The default is zero. (The sketch below puts stylized numbers on that bargaining-cost point.)
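
To make the bargaining-cost step of Posner’s argument concrete, here is a stylized sketch; every number in it is invented for illustration, and none comes from Posner.

    # Stylized illustration of the Posnerian bargaining-cost argument.
    # All figures are invented for illustration; none come from Posner.

    PROFILES = 1_000_000          # size of a direct marketer's list (assumed)
    VALUE_PER_PROFILE = 0.05      # per-record value, realized only in aggregate (assumed)
    COST_PER_NEGOTIATION = 1.00   # cost of bargaining with one individual (assumed)

    aggregate_value = PROFILES * VALUE_PER_PROFILE     # $50,000 to the marketer
    bargaining_cost = PROFILES * COST_PER_NEGOTIATION  # $1,000,000 in transaction costs

    print(f"Value of the list in aggregate: ${aggregate_value:,.0f}")
    print(f"Cost of individual bargains:    ${bargaining_cost:,.0f}")

Each record is worth pennies to its subject and something only in aggregate to the marketer, so vesting property rights in each individual would let transaction costs swamp the data’s entire value; no market forms, and the price of collection defaults to zero.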

In the next post in the series, we will examine how Warren and Brandeis influenced rights-based thinking and Constitutional Law.

[1] See William Prosser, “Privacy: A Legal Analysis,” Philosophical Dimensions of Privacy, ed. Ferdinand Schoeman (Cambridge: Cambridge University Press, 1984), 107. Originally published in California Law Review 48 (1960): 383-423. The reductionist philosophy can also be seen within Judith Jarvis Thomson’s writings. See Judith Jarvis Thomson, “The Right to Privacy,” Philosophical Dimensions of Privacy, ed. Ferdinand Schoeman (Cambridge: Cambridge University Press, 1984), 286-287. Originally published in Philosophy & Public Affairs 4 (1975): 295-314.

[2] See Richard A. Posner, “An Economic Theory of Privacy,” Philosophical Dimensions of Privacy, ed. Ferdinand Schoeman (Cambridge: Cambridge University Press, 1984), 332-345. Originally published as Richard A. Posner, “An Economic Theory of Privacy,” Regulation, May/June 1978, 19-26. Page numbers refer to the Schoeman anthology. See also Lessig, Code, 159-163. Oddly enough, Lessig too concludes by favoring a market-based approach to personal information, but he argues that all property rights should be vested in the individual. In that way, an individual can have as much or as little privacy as he or she may like, and property rights are used as a means of protecting privacy. I think this is indefensible and will provide a framework for my argument later.

[3] Ibid., 336-337.

[4] Ibid., 336-337.

[5] Ibid., 338.