RIP, EveryBlock

EveryBlock, a hyperlocal news start-up that used data to filter neighborhood news and spark discussion, has been shut down by its corporate overlord. Apparently, NBC News acquired it last year (which itself was news to me) but couldn’t find the business model to continue operating the site. That’s a common tale among hyperlocal news sites, but it still stings when one closes down.

It’s too bad — I think it had more going for it than many similarly themed sites — and its founder, Adrian Holovaty, seems shocked that the site has met its end. When he sold the site last year, he was proud of its success and confident in its future:

“EveryBlock users have used our service to accomplish amazing things in their neighborhoods: starting farmers markets, catching flashers, raising money for their community, finding/reporting lost pets…and generally getting to know their neighbors and forging community bonds. These days, something like this happens on the site nearly every day — which casual onlookers might not notice because of our long-tail, neighborhood-specific focus. EveryBlock has become a force for good, and it’s got a bright future.”

Sigh. I suppose it’s not particularly interesting that a start-up failed to locate a business strategy or that it didn’t “pivot” quickly enough to “disrupt” via its “MVP.” What is interesting about this case is that the site was a news-centric one that really challenged newsgathering tactics, asked questions about the use and display of public data and, in its small way, yielded lessons for the [cue horror-movie scream] Future of Journalism. It began, after all, as a recipient of a Knight Foundation grant.

Even more interesting is that it evolved so much over its short life (actually, wait, is six years long or short in technology?). When it began, it was just one news-tech guy’s realization that news should not be story-centric but instead should be gathered as structured data. He married the programmer’s philosophy of the separation of content and presentation with the journalist’s instincts for ever-better storytelling. Holovaty’s blog post from September 2006 is, in retrospect, both amusing and prescient. In it, he calls for parsing data and creating CMSes that support content types other than words, two notions that are laughably obvious six years later.

(On the flip side, also laughable are the mention of PDAs and the idea that tagging was “trendy.”)
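
To make the structured-data idea concrete, here is a minimal sketch, in Python, of what “news as data” might look like: the facts of a crime report stored as fields instead of paragraphs, so the same records can be mapped, filtered by block or turned into a sentence. The schema and the sample values are invented for illustration; they are not EveryBlock’s actual data model.

```python
from dataclasses import dataclass
from datetime import date

# A toy illustration of news captured as structured data rather than prose.
# The field names and sample values are hypothetical, not EveryBlock's schema.
@dataclass
class CrimeReport:
    offense: str
    block: str
    neighborhood: str
    reported_on: date
    latitude: float
    longitude: float

reports = [
    CrimeReport("burglary", "4800 block of N. Damen Ave.", "Lincoln Square",
                date(2013, 2, 7), 41.969, -87.679),
    CrimeReport("theft", "1200 block of W. Madison St.", "West Loop",
                date(2013, 2, 6), 41.882, -87.658),
]

# Because the facts are fields rather than sentences, the same records can
# drive a map, a per-block filter or an auto-generated summary line; the
# presentation stays separate from the content.
for r in reports:
    print(f"{r.offense.title()} reported on the {r.block}, {r.neighborhood}, "
          f"{r.reported_on:%b. %d}.")
```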

Holovaty turned those 2006 idea germs into EveryBlock’s mapping and reporting functionality and, ultimately, he created a robust community around neighborhood news. The site put forth a notion of what the oft-dreaded Future of Journalism could be, or one version of it, anyway. It tried something new. It experimented. And the experiment did yield results; unfortunately, the conclusion was that this model might not be quite right.

In its sad and clearly hasty post today confirming the shutdown news, EveryBlock seems to acknowledge that it was a victim of the unforgiving pace of change in the online journalism industry:

“It’s no secret that the news industry is in the midst of a massive change. Within the world of neighborhood news there’s an exciting pace of innovation yet increasing challenges to building a profitable business. Though EveryBlock has been able to build an engaged community over the years, we’re faced with the decision to wrap things up.”

In short: “We tried. We’d like to keep trying, but trying doesn’t pay the bills.” And that’s too bad.

 


Leave curation alone

A nice quick hit from CJR‘s Steven Rosenbaum today in favor of curation, which has somehow become a bad word in journalistic circles — or at least a misunderstood one.

Information overload drives content consumers to look for human-filtered, journalist-vetted, intellectually related material. This hunger for coherence isn’t unreasonable; it’s essential.

Even in the days before information overload, contextual links to other interesting sites and articles were the norm. Now it seems that unless a link is part of a “strategic partnership” or is otherwise monetized, stories on the web rarely bother to help the user with useful context. This concept, among others, is well explored by Anil Dash in his post “The Web We Lost”:

Ten years ago, you could allow people to post links on your site, or to show a list of links which were driving inbound traffic to your site. Because Google hadn’t yet broadly introduced AdWords and AdSense, links weren’t about generating revenue, they were just a tool for expression or editorializing. The web was an interesting and different place before links got monetized, but by 2007 it was clear that Google had changed the web forever, and for the worse, by corrupting links.

As Dash points out, “This isn’t our web today.” I maintain that if startup founders and VCs funded solutions to the problems faced by media, instead of the latest location-based social check-in app or redundant e-commerce site, we could begin to rebuild the industry.


Are print-to-digital apps ruinous for media?

As I mentioned in a previous post, many of my recent freelance gigs have involved reading printed materials on various electronic devices. For several distinct projects, I read the same material on no fewer than four devices at a time, and each had a different layout, a different size, different markup and different interactive elements. This was the case because Apple, Amazon and the rest render their materials in different, proprietary formats, and the hardware they’ve created boasts proprietary specs. It has been a major shock to learn how much work and money must go into optimizing the same printed material for all these devices. And it’s abundantly clear that as publishing professionals, we must do much more work, and soon, in establishing standards for print-to-digital conversion.

“Technology is always destroying jobs and always creating jobs, but in recent years the destruction has been happening faster than the creation.”
—Erik Brynjolfsson, an economist and director of the M.I.T. Center for Digital Business (via)

Arguably, this convoluted process is employing me. The technology has, in this case, created a new job: There’s a need for someone to read each article of each issue (or each page of each chapter of each book) on each device. I don’t want to sound ungrateful, because I’m developing quite a little niche for myself as an expert on print-to-digital conversions. But I wonder how long it can last, considering that print media is undergoing huge change at the moment. Momentous, disruptive, industry-wide change that’s happening at a rapid pace, particularly with regard to technology.

We might be powerhouse publishers, but in the tech world we’re just like every other Joe App Maker, 96 percent of whom do not make significant money on their apps. According to a recent article in the New York Times, 25 percent of Apple game app makers made less than $200, with only 4 percent making upwards of $1 million. Granted, random game app makers don’t have the brand recognition or cachet of major publishing houses; neither do they have an overarching, Apple-endorsed app that features their stuff (Newsstand for Apple, if you’re still following me).

But make no mistake, the field has been leveled, and instead of competing only with each other, even the biggest content publishers now also compete with Angry Birds, Twitter, Facebook, travel apps, e-commerce apps, dining apps, coupon apps…the list is endless.

The difference? Unlike many apps, the media’s brand relevance and reputation absolutely hinge on an amazing user experience across devices at all times. In short, it has to be perfect. And in order for that to happen, the same material must be reconceived by its creators multiple times. It seems impossible to believe, but publishers optimize the same product over and over again, incurring all sorts of real costs from designers, editors, producers and programmers with each iteration. (And this isn’t even counting the web producers who conceive it all over again for the online version!) Once you account for these costs, in addition to the so-called legacy costs of creating the print product in the first place, it hardly makes sense even to enter into the realm of app creation for many print products. That’s even if you can get your app sponsored or otherwise monetized, and even if you use Adobe to help you create it.

I realize that the common line of thought is that, like websites, if you don’t have an app presence, you don’t exist. Half a decade ago, this principle propelled the creation of a million new half-assed websites (websites: another print-distribution model without a standard!). But I’d counter that without apps — without content — these devices would be useless. So unless we want to bankrupt the already struggling print media industry further, we must stop playing by the device makers’ rules and rewrite them to benefit our business. We must invent technology that adapts our product (ie, content) to any device at any orientation. We must create or help market forces create a standard we can implement and follow; we must negotiate a better rate than giving away 30 percent of our revenue; we must not “throw in” digital access with print subscriptions.
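
For what it’s worth, the kind of standard I have in mind would treat the content as a single, device-neutral structure and leave presentation to renderers. Here is a rough, hypothetical sketch of that “create once, render anywhere” idea; the field names and the two output formats are my own inventions, not any publisher’s or device maker’s actual spec.

```python
import textwrap

# A rough sketch of "create once, render anywhere": one device-neutral
# article structure, multiple cheap renderers. The structure and formats
# are illustrative only.
article = {
    "headline": "City Council Approves Bike-Lane Expansion",
    "dek": "Construction is expected to begin in the spring.",
    "body": [
        "The council voted 7-2 on Tuesday to fund the project.",
        "Advocates called the vote a turning point for cyclists.",
    ],
}

def render_html(a):
    """Render for a web or tablet view."""
    paragraphs = "\n".join(f"<p>{p}</p>" for p in a["body"])
    return (f"<article>\n<h1>{a['headline']}</h1>\n"
            f"<h2>{a['dek']}</h2>\n{paragraphs}\n</article>")

def render_plain(a, width=40):
    """Render for a narrow, text-only screen (think early e-ink)."""
    lines = [a["headline"].upper(), a["dek"], ""]
    for p in a["body"]:
        lines.extend(textwrap.wrap(p, width))
        lines.append("")
    return "\n".join(lines)

print(render_html(article))
print(render_plain(article))
```

The particular formats don’t matter; the point is that the expensive editorial work happens once, and each new device costs only a renderer, not another round of designers, editors and programmers.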

I know, I know: Nature abhors a vacuum. If we don’t follow suit, we’re nothing. But following hardware makers blindly down dark passageways as our pockets get picked around every corner isn’t a smart strategy, either. In one big way, we are not like Joe App Maker: We possess a hugely powerful medium. We must harness our strengths and lead ourselves forward. A nice start might be to begin taking a stand against having to endlessly tinker with every article in every issue of every magazine, every book, every design.

As Shawn Grimes, the app developer profiled by the Times, said: “People used to expect companies to take care of them. Now you’re in charge of your own destiny, for better or worse.” Let’s be in charge of our own destiny.

Related: Read my post about the best e-reading devices.


The best e-reading devices, as determined by a control group of one

Many of my recent freelance gigs have involved reading printed materials on various electronic devices, so I’ve basically become a one-woman control group for determining the best device-reading experience. I’ve had the opportunity to directly compare the following devices: Kindle E-Ink, Third-Generation Kindle (“Keyboard Kindle”), Fourth-Generation Kindle, Kindle Fire, Kindle Fire HD, Samsung Galaxy, iPad 2 and iPad 3.

Ready for the results? The winner is…the iPad 3 with retina display!

The result is perhaps not surprising, but the gap in performance and readability between the iPad 3 and all of these other devices really is shocking. The iPad 3, in addition to offering a sleeker and more elegant overall experience, is also far, far easier to read. The display is better than even the original printed product to which I was comparing it, believe it or not. The words are clearer and crisper; the photos are deeper and livelier.

When evaluating tablets, we must start with the premise that every six months a new one is released, and that the newer versions are superior to the previous generations. That leaves truly valid comparisons, at the moment, among only the iPad Mini, the iPad 3 and the Kindle Fire HD. Setting aside the iPad Mini for the moment because it doesn’t (for some stupid reason) yet have retina display, that leaves the latter two. Perhaps to casual users, the gap between the iPad 3 and the Kindle Fire HD isn’t noticeable, but having spent many weeks putting down one device and picking up the other, I can tell you with certainty that the Apple product blows the Amazon one out of the water.

I acknowledge that I am an Apple person. I have an iMac, an iPad 2 and an iPhone, and when I had a Droid phone for about six weeks last year, I wanted to throw it out the window. (Except Swype. I love Swype! Why doesn’t Apple have Swype?!) So for me, the Apple experience — gestures that just seem to make sense, buttons where they should be, seamless navigation among apps, access to hundreds of thousands of other amazing and useful apps — in addition to the reading experience puts the device in a field of its own.

Is the difference in quality worth $200 ($499 for the iPad 3 versus $299 for the Kindle Fire HD), especially if you aren’t already living the Apple lifestyle? It depends on what you want to use it for and how much weight you want to tote around town, but for my money, even if — or maybe especially if — you only use it to read books and magazines, the retina display is such a game changer that I absolutely think so.

Separately from work, I recently test-drove a Microsoft Surface briefly, and my initial thoughts were that it might be nice if you already live in the Windows universe — native Outlook and Excel apps, for example — but it really doesn’t do anything better than the iPad does. And that includes the weird add-on cover keyboards, which are either nontactile (in other words, no better than the virtual keyboard) or just small enough compared with a normal keyboard to be aggravating. (And this is coming from someone who loathes Apple’s virtual keyboard.)

I’ve also had the opportunity to play with the seven-inch Nexus, which has a nice hand-feel and is extremely portable. I don’t think this makes up for its lack of sensible navigation or access to trusted apps, but it’s an OK alternative to the real game-changing device, which will be the next-generation iPad Mini, with retina display. (True story: I’ve never even laid eyes on a real-life Nook.)

It’s a safe bet that when the iPad Mini with retina display — small enough to feel good in the hands and fit in the bag, but with the text clarity of the iPad 3 — comes to market, I’ll be first in line.

Related: Read my post about why print-to-digital conversion is more difficult and more expensive than it should be.


“What happened to The Daily?” quote roundup

The Daily, News Corp.’s general-interest iPad news product, shut down this week. Media experts (or perhaps I should say “observers”—I’m not sure the media has any experts anymore) disagree on the specific reasons it failed, but they do seem to agree that it was doomed. The columns I’ve read and rounded up from around the web cite the following three conclusions:

1. Making it available only via iPad and without access to the open social web (readers couldn’t share links) made it a walled garden.

“The Daily’s device-bound nature limited its potential…. Locking into a single platform and not having a web front door limited sharing and social promotion.” —Joshua Benton

“Publishing for a single platform, whether print, web, or the iPad, is a foolish move, and I think we knew that before The Daily was excised from News Corp.’s balance sheet.” —Ben Jackson

“The product, its content and the conversation around it should have been porous, able to flow in and out of social media platforms and be informed by them. Content should have been unlocked, and made available to subscribers on all platforms.” —Jordan Kurzweil

“More than 54 million people in the U.S. use an iPad at least once a month, but they remain just 16.8% of the population and 22.2% of people on the internet, according to eMarketer. That put a hard cap on the number of subscribers The Daily could acquire no matter how solid its product.” —Nat Ives

2. It was overburdened with staff—despite already laying off a third of the staff over the summer—and saddled with a “legacy” (i.e., print) org structure

“Simply put, The Daily never attracted the revenue required to support a team of 120 people. Launching what amounted to a digital daily newspaper with many of the legacy costs and structures of print wasn’t the best idea.” —Hamish McKenzie

“The Daily should have been run like a startup, a digital business, not a division within a division in a corporation.” —Jordan Kurzweil

3. Its content wasn’t interesting (apparently! I never read it…see No. 1)

“Though it looked quite nice and its content was competent, that content was all-in-all just news and news is a commodity available for free in many other places.” —Jeff Jarvis

“[The term general reader means] a media executive is imagining himself and his friends (you know, normal guys) and intending to produce a bundle of content for that hyperspecific DC-to-Boston-went-to-a-good-college-polo-shirts-and-grilling demographic…. This is not to say that media properties cannot be built with the goal of reaching the mainstream [but successful] sites have been built up like sedimentary rock from a bunch of smaller microaudiences. Layers of audience stack on top of one another to reach high up the trafficometer.” —Alexis Madrigal

Whatever the reasons it was closed down, I’m glad someone at least experimented with new ways to produce news. Trying stuff really is the only way to learn. My condolences to those journalists who were laid off. They should consider the many lessons they’ve no doubt learned and call themselves not out-of-work journos but technicians in the lab of digital journalism — scientists who can take the knowledge they’ve gleaned and apply it to the next experiment.


The media’s Dust Bowl

It’s human nature to compare things. We put things in context for better understanding. “This thing [business/weather/process/person/event] that is happening is like this other thing that happened, and that thing turned out [good/bad/different/better/worse].”

I’ve been doing a lot of that lately surrounding the media. Specifically, I’ve spent time contemplating how to reconcile how valuable journalism is to society with how little actual monetary value it generates. As I’ve written about before, no one knows what’s going to happen to this business: whether it will go the way of the steamship and the telegraph, reinvent itself a la Apple, or something in between.

I’m not the only observer who’s searching for an appropriate comparison from the past in order to predict the media’s future, but I do find that some insights are better than others; does anyone really think that the envelope business, of all things, is a good model for the Random House-Penguin merger? (Does anyone think of “the envelope business” at all?)

Watching the Ken Burns PBS documentary The Dust Bowl recently, however, opened my eyes to a new analogy for the media of the present day: farming a century ago. (And why not — we did recently learn that there are far more software app engineers than farmers.) According to Burns, farmers in the Great Plains around 100 years ago sold their goods, wheat in particular, in enough volume and at a fair enough price that they kept their families fed, happy and productive. Before the Great Depression hit, they weathered periodic yet persistent droughts and saw occasional technological breakthroughs (gas-powered plowing, for example). But year after year, they found a way to keep going, even increasing volume to make up for the deficits caused by off years. That is, until the permanently landscape-altering Dust Bowl.

Compare this to journalists and media today. For decades we plied our trade, not making big money but making enough to support our families. We changed with the times, moving from copy boys and paste-ups to computers. But the past decade has seen such a huge acceleration of technology (and a hugely inverse deceleration of jobs) that our worth is now, to put it mildly, in question. Like the farmers, we’ve tried doing more: You’re now not only a reporter, you’re also a videographer, photographer and blogger — and you will hereafter be known as a “content creator.” You’re now responsible not only for reporting your usual one-story-by-deadline allotment but also for writing six additional posts a day (and you need to know how to produce them, tag them and upload them).

But as the farmers discovered, doing more not only didn’t help them, it actually created its own set of problems. In their case, they unknowingly caused the largest man-made ecological disaster to date (you’re well on your way, though, global climate change: hang in there). In ours, the huge volume of posts was churned through by disloyal consumers, the glut and pace belittled the value of the news, and the business changed from creating newsworthy, relevant content to attracting eyeballs and lowering bounce rates and counting click-throughs and measuring social engagement and Tweeting viral videos.

Other, larger factors were also at play, including the rapid pace of technological development. The ease of use of technology meant that anyone could be a creator of content — so the process of journalism was democratized, but it was also dumbed down and its worth devalued.

“But of all our losses, the most distressing is our loss of self-respect. How can we feel that our work has any dignity or importance when the world places so low a value on the products of our toil?”

—Caroline Henderson, Oklahoma farmer, writing during the 1932 drought, just prior to the Dust Bowl’s worst

Now, I’m not saying it’s a perfect comparison. We haven’t had to bury cattle that suffocated during “black blizzards” or children who caught “dust pneumonia.” But I think it’s a decent metaphor, because the media is going through its version of the Dust Bowl. Newspapers and magazines are closing up shop at an unprecedented pace; media businesses are losing money quarter after quarter and year after year, with no end in sight; those workers who are able (and I count myself among this number) are learning new skills and moving into new areas. (All of this can be said for other industries as well, by the way, particularly music.)

Somewhat brazenly, and I think disrespectfully, we’ve taken to calling tech and business shakeups, events and new models “disruptions.” Of course, since the beginning of time businesses have striven to disrupt other, existing businesses, but it seems much more ruthless to start your business with the sole intent of creating wreckage. I think it’s fair to cast our historical eye onto the Depression and the Dust Bowl and deem them disruptions, at the very least. And it’s easy to forget, but disruptions have a cost — a monetary one and a human one.

Years from now, I wondered while watching the documentary, how will journalism be perceived? Who will be the talking heads and what will they say? Which commentators will highlight which historical implications that, in retrospect, seem clear? How will the people generations from now — even one or two — talk about the media? Will we have adapted with the times and made a new reality for ourselves (and somehow have figured out a way to feed our families along the way)? Is journalism like the family farm in the Oklahoma panhandle of the 1930s, and are we farmers, continuing to plow the fields that we’ve yet to learn will never again yield crops? Is it like kerosene lighting, steam-powered train engines, millinery, fax machines, answering services, 8-tracks, the luncheonette, and the endless list of other businesses throughout history that litter the shoulders of the road toward the future? I want to believe that it’s not. I hope upon hope that it’s not.

“Hope kept them going, but hope also meant that they were being constantly disappointed.”

—Pamela Riney-Kehrberg, Dust Bowl historian


Nobody knows anything

Nobody knows anything.

I’ve suspected for a while that no one really knows what they’re doing, what’s next, what’s going on, what the plan is (“What’s the plan, Phil?” –Claire Dunphy). As I age and gain experience, I’m starting to realize the truth of it all: Everything is slapdash. Everything is last-minute. Everything is barely hanging on. Everyone is making it up as they go along and crossing their fingers.

At the highest levels of government, the military and business, it’s all perilously close to nonfunctional. (And often it is nonfunctional, not to mention dysfunctional — a distinction.) So why should the media — even the upper echelons of the media — be any different? It’s not.

Nobody knows anything.

This thought crystallized in my mind earlier this week when I attended a tech start-up job fair Monday, an all-day start-up conference Tuesday and a Meetup called “Content Conversations” Tuesday night.

The resulting emotion from this string of events was one of deep malaise. I’d gone in thinking I’d get some perspective and advice from job creators and also hear some inspiring start-up success stories. As it turns out, the companies that were hiring were seeking programmers and UX designers, not journalists (or even, as we’ve come to be known post-Internet, “content creators”). And the panelists the following day, those who were alleged successes, had very little practical advice for the attendees. Sure, there were platitudes expressed by these supposed luminaries: Stay true to yourself. Find your voice. Put the user first.

But nothing said was really actionable. Now, going in I expected tech start-up founders to speak variously in jargon and dude-speak; it’s their MO. However, I wanted more from the content-focused discussions and panelists. Unfortunately they, too, had only vague advice in terms of the future of content on the web, what’s next for those of us who create content, and how brands can use content to sell their products.

I left the conference to attend the Meetup, which was a Q&A with Noah Rosenberg, the founder and editor of Narrative.ly. He seems like a nice fella, and I agree with his thesis that the Internet’s short bursts of information are starting to zap our brains. He’s trying to remedy that with what he terms slow journalism — long-reads stuff focused on a weekly theme. But he’s paying his contributors for their many-thousands-of-words pieces not in dollars but in exposure, mostly. He regrets that he can’t pay them what they’re worth, and when I asked how he thought the Internet could help create high-quality content while providing a living wage for content creators, he said, “That’s the million-dollar question” and “There’s no magic bullet.” So no answers there, either.

I left feeling dejected and resigned. But I awoke the next morning with a realization: Nobody knows anything. No one was able to provide answers to the information I was seeking — all day long — because no one knows. Not high-ranking people, not low-ranking people. Not CEOs, CTOs, CMOs or interns. No one!

Nobody knows anything because we are in a time of extreme transition. That’s not a new or original thought, even for me. But sometimes you have a moment when a mere notion is made real. You go from knowing it to knowing it. For me, that was this experience. I saw for myself, hands-on and up close, that in times of transition the story cannot be told, because no one knows how it turns out. You have to live it, day by endless day, until you’re on the other side. And even then, you don’t really know for sure that you’ve reached the other side until much later.

Just as you couldn’t tell that the disappearing shoals under your feet presaged a destructive deluge that would make you question your survival, so too are you unsure, once you’ve grabbed onto a branch and gingerly climbed onto the opposite bank, that you’re truly safe.

That is the unfortunate state of the media today: We’re in the rapids, hanging on for dear life and praying. (Which I would not deem a strategy, exactly.) The media — news, advertising, marketing, TV, movies, print, online, creation, distribution — and those of us who practice it are evolving, and nobody knows what will happen. And I’m not upset about it; I’m ready to join in and try things, experiment and help in the effort of making it up as we go along.

And I don’t think anyone else has a better idea how to navigate these waters, because nobody knows anything.


Just say no to reading comments

A few choice quotes from Salon’s Mary Elizabeth Williams about why you should never read the comments on your own pieces — or ever, really. Needless to say, I agree.

I used to believe that as an online writer, I had an obligation to read the comments. I thought that it was important from a fact-checking perspective, that it somehow would help me grow as a writer. What I’ve learned is that if there’s something wrong or important or even, sometimes, good about a story, someone will let you know.

I want it to be better. But it’s just not.

[Not reading comments has] calmed the negative chatter in my head and it’s made my experience of the Internet a whole lot healthier. I highly recommend it.

Talk about (as I often do) the differences between print and online! This is one of the bigger ones, in terms of psychic drain if nothing else. I don’t know how it got this bad, but it did. Perhaps it’s a reflection of the general (lack of) discourse in the public and political arenas nowadays. Perhaps the technology has made it permissible. Perhaps I’m just sensitive. In any case, my self-protective instincts, like Williams’s, just make me want to disengage completely.

I feel about Internet comments roughly the same way I’ve started to feel about television news, with its know-nothing talking heads and lowest-common-denominator coverage made for an attention span–less public that’s apparently eager to share its opinions (about which I care very little). They’re both icky and make me feel bad, angry and frustrated.

Two recent and related stories about others who are taking the opposite stance from the “just walk away” model and are actively trying to make the Internet better:

Good luck to them — to us all.


Junk at scale vs. quality in proportion

SF Weekly recently published an in-depth look at the Bleacher Report, a sports-centric site whose content is populated almost entirely by its readers. As the article notes, it “[tapped] the oceanic labor pool of thousands of unpaid sports fanatics typing on thousands of keyboards.” The site is user-generated content taken to its logical extreme, for good and bad. The good being the scale of coverage; the bad, the poorly written content.

But now it’s gone pro: it has hired real writers and editors, and the “lowest-common-denominator crap,” as editor King Kaufman calls it, has been gussied up. The site is now owned by Turner Broadcasting, which snapped it up this summer for a couple hundred mil. Not bad for a site that was built on the backs of unpaid superfans.

I’m not interested in the Bleacher Report per se, but I am interested in the idea that nowadays, crap at scale matters less than quality in proportion, because it’s part of a larger trend sparked by disparate forces in the evolution of the Internet. They’ve come together to wipe away a short-lived business model that called for garbage content that ranked well in search but left the user unfulfilled. This model’s most prominent proponent was Demand Media (and its sites, among which are eHow and Livestrong), but certainly the Bleacher Report qualifies too.

The article does a good job explaining how Bleacher Report (and Demand) initially found so much success — basically, by cheating search engines:

Reverse-engineering content to fit a pre-written headline is a Bleacher Report staple. Methodically crafting a data-driven, SEO-friendly headline and then filling in whatever words justify it has been a smashing success.

The piece also touches on the larger context of the shift from what it calls “legacy media” to the current landscape:

After denigrating and downplaying the influence of the Internet for decades, many legacy media outlets now find themselves outmaneuvered by defter and web-savvier entities like Bleacher Report, a young company engineered to conquer the Internet. In the days of yore, professional media outlets enjoyed a monopoly on information. Trained editors and writers served as gatekeepers deciding what stories people would read, and the system thrived on massive influxes of advertising dollars. That era has gone, and the Internet has flipped the script. In one sense, readers have never had it so good — the glut of material on the web translates into more access to great writing than any prior era. The trick is sifting through the crap to find it. Most mainstream media outlets are unable or unwilling to compete with a site like Bleacher Report, which floods the web with inexpensive user-generated content. They continue to wither while Bleacher Report amasses readers and advertisers alike.

But that being the case, we’re now entering a brand-new era, one that will attempt to combine the scale and optimization of the new guys with the polish of the old. And we’re seeing the end of the SEO-engineered-dreck model for three reasons:

1. The rise of social media as currency
2. Google’s Panda algorithm change
3. Advertiser interest

1. The rise of social media as currency
Used to be, back in the aughts, when you were looking for (for example) a podiatrist, you’d Google “podiatrist 10017.” You’d get pages and pages of results; you’d sift through them and cross-reference them to your insurance provider, then go to the doctor, discover he had a terrible bedside manner, and decide you’d rather keep your darn ingrown toenail. Nowadays, your first move would probably be to ask your friends on Facebook or Twitter, “Anyone in NYC have a recommendation for a good podiatrist who takes Blue Cross?” And you’d get a curated response from a dependable source (or even a few of them).

Plainly, social media users endorse people, products and articles that are meaningful. You’d never tweet, “Great analysis of how to treat an ingrown toenail on eHow” (at least not unironically). But you might recommend an article from Fast Company on the latest from ZocDoc.

There will always be a place for search — it’s one of the main entryways into any news or information site, and that’s not going to change anytime soon — but good quality content from a trustworthy source is becoming increasingly valuable again.

2. Google’s Panda algorithm change
In early 2011, Google changed its algorithm in an update it called Panda. This meant that, broadly speaking, better content ranked higher in Google’s results. Its advice to publishers regarding SEO was basically, “Create good content and we’ll find it.”

No longer could Demand Media’s and Bleacher Report’s search-engine-spamming formula win them page views. In fact, Demand Media completely retooled itself in response, saying that “some user-generated content will be removed from eHow, while other content will run through an editing and fact-checking process before being re-posted.”

In other words, quality started to matter to users, who let Google know it, and Google responded accordingly. The result was a sea change from how it had been done, leading to a completely new business model for Demand and its ilk.

3. Advertiser interest
Advertisers have long shunned poor quality content. From the beginning, they almost never wanted placements on comment pages, which can feature all-caps rants, political extremism at its worst and altogether unsavory sentiments (which is why many news sites feature comments separately — you thought that tab or link to comments on a separate page was a UX choice? Hardly). The SF Weekly article quotes Bleacher Report’s Kaufman, who says of its transformation to better quality stuff, “This was not a decision made by the CEO, who got tired of his friends saying at parties, ‘Boy, Bleacher Report is terrible.’ Bleacher Report reached a point where it couldn’t make the next level of deal, where whatever company says ‘We’re not putting our logo next to yours because you’re publishing crap.’ Okay, that’s the market speaking.”

So it is. A longer story for another time, but neither advertisers nor publishers are getting a lot of bang out of banner ads, CPMs and click-through rates. Increasingly, the least you can do to appeal to the market, if you’re a publisher, is create good content. How to do it without breaking your budget and while devising new technologies, maintaining your legacy product and operations, and appealing to readers…well, if I knew the answer to that, I’d be a rich woman.

Meantime, even though “critics from traditional journalistic outlets continue to knock Bleacher Report as a dystopian wasteland where increasingly attention-challenged readers slog through troughs of half-cooked word-gruel, inexpertly mixed by novice chefs,” they’re making money like you wouldn’t believe. They don’t break stories, they own them (the same is true of the Huffington Post).

Time for the “legacy” to embrace the future.


Narrative Science and the Future of StoryTelling


On Friday I had the good fortune to attend the Future of StoryTelling conference. Among the leaders and luminaries in attendance (whose names I will not drop here) was Dr. Kris Hammond, who is the CTO at Narrative Science, which has created an artificial intelligence product called Quill that transforms data into stories (the product generates a story every 28 seconds, per Hammond). I’ve written about Narrative Science before, and I argued in that post that Narrative Science “is not a threat, it’s a tool, and it fills a need.”
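
Narrative Science hasn’t published Quill’s internals, and I won’t pretend to know them, but the general shape of data-to-story generation can be sketched in a few lines: pick an angle from structured data, then fill a template. The box score and templates below are pure invention, a toy illustration of the concept rather than anything the company actually does.

```python
# A toy illustration of data-to-story generation; emphatically not how Quill
# works. Given a box score as structured data, choose the most newsworthy
# angle, then render it as prose.
game = {
    "home": "Sharks", "away": "Comets",
    "home_score": 3, "away_score": 2,
    "winning_goal_minute": 88,
}

def pick_angle(g):
    """Choose a story angle from the data."""
    margin = abs(g["home_score"] - g["away_score"])
    if margin == 1 and g["winning_goal_minute"] >= 85:
        return "late_winner"
    return "routine_win"

def write_story(g):
    """Fill in a template that matches the chosen angle."""
    if g["home_score"] > g["away_score"]:
        winner, loser = g["home"], g["away"]
    else:
        winner, loser = g["away"], g["home"]
    score = f"{max(g['home_score'], g['away_score'])}-{min(g['home_score'], g['away_score'])}"
    templates = {
        "late_winner": (f"{winner} snatched a {score} win over {loser} with a "
                        f"goal in the {g['winning_goal_minute']}th minute."),
        "routine_win": f"{winner} beat {loser} {score}.",
    }
    return templates[pick_angle(g)]

print(write_story(game))
```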

Now that I’ve met Dr. Hammond and heard him speak, I’m more a believer than ever that this is the future of journalism — and not just journalism, but all of media, education, healthcare, pharmaceuticals, finance, on and on. Most folks at FoST seemed to be open to his message (it’s hard to disagree that translating big data into understandable stories probably is the future of storytelling, or at least part of it). But Hammond did admit that since the Wired story came out in which he was quoted as saying that in 15 years, 95 percent of news will be written by machines, most journos have approached him with pitchforks in hand.

I went in thinking that the two-year-old Narrative Science went hand-in-hand with Patch and Journatic in the automated-and-hyperlocal space, but I now think that Hammond’s goals, separate from these other companies, are grander and potentially more landscape-altering.

I know I sound like a fangurl, but I was truly that impressed with his vision for what his product can be, and what it will mean to the future of journalism. No, it can’t pick up the phone and call a source. It can’t interview a bystander. It can’t write a mood piece…yet. But they’re working on it.

With that, my top 10 quotes of the day from Dr. Hammond:

The first question we ask is not “What’s the data,” it’s “What’s the story?” Our first conversation with anyone doesn’t involve technology. Our first conversation starts, “What do you need to know, who needs to know it and how do they want it presented to them?”

Our journalists start with a story and drive back into the data, not drive forward into the data.

We have a machine that will look at a lot and bring it down to a little.

The technology affords a genuinely personal story.

It’s hard, as a business, to crack the nut of local. For example, Patch doesn’t have the data, but they’re the distribution channel. There’s what the technology affords and what the business affords…. We don’t want to be in the publication business.

Meta-journalists’ [his staff is one-third journalists and two-thirds programmers] job is to look at a situation, and map a constellation of possibilities. If we don’t understand it, we pull in domain experts.

The world of big data is a world that’s dying for good analysis. We will always have journalists and data analysts. What we’re doing is, we’re taking a skill set that we have tremendous respect for and expanding it into a whole new world.

The overall effort is to try to humanize the machine, but not to the point where it’s super-creepy. We will decide at some point that there’s data we have that we won’t use.

Bias at scale is a danger.

The government commitment to transparency falls short because only well-trained data journalists can make something of the data. I see our role as making it for everybody…. Let’s go beyond data transparency to insight transparency. It can’t be done at the data level, it can’t be done at the visualization level, it has to be done at the story level.
