Junk at scale vs. quality in proportion

SF Weekly recently published an in-depth look at the Bleacher Report, a sports-centric site whose content is populated almost entirely by its readers. As the article notes, it “[tapped] the oceanic labor pool of thousands of unpaid sports fanatics typing on thousands of keyboards.” The site is user-generated content taken to its logical extreme, for good and bad. The good being the scale of coverage; the bad, the poorly written content.

But now it’s gone pro: the site has hired real writers and editors, and the “lowest-common-denominator crap,” as editor King Kaufman puts it, has been gussied up. It’s now owned by Turner Broadcasting, which snapped it up this summer for a couple hundred mil. Not bad for a site built on the backs of unpaid superfans.

I’m not interested in the Bleacher Report per se, but I am interested in the idea that nowadays, crap at scale matters less than quality in proportion, because it’s part of a larger trend sparked by disparate forces in the evolution of the Internet. They’ve come together to wipe away a short-lived business model that called for garbage content that ranked well in search but left the user unfulfilled. This model’s most prominent proponent was Demand Media (and its sites, among which are eHow and Livestrong), but certainly the Bleacher Report qualifies too.

The article does a good job explaining how Bleacher Report (and Demand) initially found so much success — basically, by cheating search engines:

Reverse-engineering content to fit a pre-written headline is a Bleacher Report staple. Methodically crafting a data-driven, SEO-friendly headline and then filling in whatever words justify it has been a smashing success.

The piece also touches on the larger context of the shift from what it calls “legacy media” to the current landscape:

After denigrating and downplaying the influence of the Internet for decades, many legacy media outlets now find themselves outmaneuvered by defter and web-savvier entities like Bleacher Report, a young company engineered to conquer the Internet. In the days of yore, professional media outlets enjoyed a monopoly on information. Trained editors and writers served as gatekeepers deciding what stories people would read, and the system thrived on massive influxes of advertising dollars. That era has gone, and the Internet has flipped the script. In one sense, readers have never had it so good — the glut of material on the web translates into more access to great writing than any prior era. The trick is sifting through the crap to find it. Most mainstream media outlets are unable or unwilling to compete with a site like Bleacher Report, which floods the web with inexpensive user-generated content. They continue to wither while Bleacher Report amasses readers and advertisers alike.

But that being the case, we’re now entering a brand-new era, one that will attempt to combine the scale and optimization of the new guys with the polish of the old. And we’re seeing the end of the SEO-engineered-dreck model for three reasons:

1. The rise of social media as currency
2. Google’s Panda algorithm change
3. Advertiser interest

1. The rise of social media as currency
Used to be, back in the aughts, when you were looking for (for example) a podiatrist, you’d Google “podiatrist 10017.” You’d get pages and pages of results; you’d sift through them and cross-reference them against your insurance plan, then go to the doctor, discover he had a terrible bedside manner, and decide you’d rather keep your darn ingrown toenail. Nowadays, your first move would probably be to ask your friends on Facebook or Twitter, “Anyone in NYC have a recommendation for a good podiatrist who takes Blue Cross?” And you’d get a curated response from a dependable source (or even a few of them).

Plainly, social media users endorse people, products and articles that are meaningful. You’d never tweet, “Great analysis of how to treat an ingrown toenail on eHow” (at least not unironically). But you might recommend an article from Fast Company on the latest from ZocDoc.

There will always be a place for search — it’s one of the main entryways into any news or information site, and that’s not going to change anytime soon — but good-quality content from a trustworthy source is becoming increasingly valuable again.

2. Google’s Panda algorithm change
In early 2011, Google changed its algorithm in an update it called Panda. This meant that, broadly speaking, better content ranked higher in Google’s results. Its advice to publishers regarding SEO was basically, “Create good content and we’ll find it.”

No longer could Demand Media’s and Bleacher Report’s search-engine-spamming formula win them page views. In fact, Demand Media completely retooled itself in response, saying that “some user-generated content will be removed from eHow, while other content will run through an editing and fact-checking process before being re-posted.”

In other words, quality started to matter to users, who let Google know it, and Google responded accordingly. The result was a sea change from how it had been done, leading to a completely new business model for Demand and its ilk.

3. Advertiser interest
Advertisers have long shunned poor-quality content. From the beginning, they almost never wanted placements on comment pages, which can feature all-caps rants, political extremism at its worst and altogether unsavory sentiments (which is why many news sites feature comments separately — you thought that tab or link to comments on a separate page was a UX choice? Hardly). The SF Weekly article quotes Bleacher Report’s Kaufman, who says of the site’s transformation to better-quality stuff, “This was not a decision made by the CEO, who got tired of his friends saying at parties, ‘Boy, Bleacher Report is terrible.’ Bleacher Report reached a point where it couldn’t make the next level of deal, where whatever company says ‘We’re not putting our logo next to yours because you’re publishing crap.’ Okay, that’s the market speaking.”

So it is. A longer story for another time, but neither advertisers nor publishers are getting a lot of bang out of banner ads, CPMs and click-through rates. Increasingly, the least you can do to appeal to the market, if you’re a publisher, is create good content. How to do it without breaking your budget and while devising new technologies, maintaining your legacy product and operations, and appealing to readers…well, if I knew the answer to that, I’d be a rich woman.

Meantime, even though “critics from traditional journalistic outlets continue to knock Bleacher Report as a dystopian wasteland where increasingly attention-challenged readers slog through troughs of half-cooked word-gruel, inexpertly mixed by novice chefs,” they’re making money like you wouldn’t believe. They don’t break stories, they own them (the same is true of the Huffington Post).

Time for the “legacy” to embrace the future.


Narrative Science and the Future of StoryTelling


On Friday I had the good fortune to attend the Future of StoryTelling conference. Among the leaders and luminaries in attendance (whose names I will not drop here) was Dr. Kris Hammond, CTO of Narrative Science, which has created an artificial-intelligence product called Quill that transforms data into stories (it generates a story every 28 seconds, per Hammond). I’ve written about Narrative Science before, and I argued in that post that Narrative Science “is not a threat, it’s a tool, and it fills a need.”
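
To make that concrete, here’s a minimal sketch of how template-driven data-to-story generation can work — my own illustration of the general technique, not Quill’s actual code, and every name, threshold and template in it is invented for the example:

```python
# A hypothetical sketch of data-to-story generation in the spirit of Quill:
# choose the most newsworthy angle from structured data, then render it as
# prose. All names and thresholds here are invented for illustration.

def pick_angle(game):
    """Start from the story ("what's the angle?") and drive back into the data."""
    margin = abs(game["home_score"] - game["away_score"])
    if margin >= 20:
        return "blowout"
    if margin <= 3:
        return "nail_biter"
    return "routine_win"

TEMPLATES = {
    "blowout": "{winner} crushed {loser} {ws}-{ls} on {date}.",
    "nail_biter": "{winner} edged {loser} {ws}-{ls} in a thriller on {date}.",
    "routine_win": "{winner} beat {loser} {ws}-{ls} on {date}.",
}

def render(game):
    """Fill the chosen angle's template with the facts that support it."""
    home_won = game["home_score"] > game["away_score"]
    winner, loser = (game["home"], game["away"]) if home_won else (game["away"], game["home"])
    ws, ls = sorted([game["home_score"], game["away_score"]], reverse=True)
    return TEMPLATES[pick_angle(game)].format(
        winner=winner, loser=loser, ws=ws, ls=ls, date=game["date"])

game = {"home": "Mudville", "away": "Springfield",
        "home_score": 98, "away_score": 71, "date": "Oct. 5"}
print(render(game))  # -> Mudville crushed Springfield 98-71 on Oct. 5.
```

Toy-simple, obviously, but the pipeline — decide on the angle first, then drive back into the data to justify it — is the part Hammond kept returning to.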

Now that I’ve met Dr. Hammond and heard him speak, I’m more a believer than ever that this is the future of journalism — and not just journalism, but all of media, education, healthcare, pharmaceuticals, finance, on and on. Most folks at FoST seemed to be open to his message (it’s hard to disagree that translating big data into understandable stories probably is the future of storytelling, or at least part of it). But Hammond did admit that since the Wired story came out in which he was quoted as saying that in 15 years, 95 percent of news will be written by machines, most journos have approached him with pitchforks in hand.

I went in thinking that the two-year-old Narrative Science went hand-in-hand with Patch and Journatic in the automated-and-hyperlocal space, but I now think that Hammond’s goals, separate from these other companies, are grander and potentially more landscape-altering.

I know I sound like a fangurl, but I was truly that impressed with his vision for what his product can be, and what it will mean to the future of journalism. No, it can’t pick up the phone and call a source. It can’t interview a bystander. It can’t write a mood piece…yet. But they’re working on it.

With that, my top 10 quotes of the day from Dr. Hammond:

The first question we ask is not “What’s the data,” it’s “What’s the story?” Our first conversation with anyone doesn’t involve technology. Our first conversation starts, “What do you need to know, who needs to know it and how do they want it presented to them?”

Our journalists start with a story and drive back into the data, not drive forward into the data.

We have a machine that will look at a lot and bring it down to a little.

The technology affords a genuinely personal story.

It’s hard, as a business, to crack the nut of local. For example, Patch doesn’t have the data, but they’re the distribution channel. There’s what the technology affords and what the business affords…. We don’t want to be in the publication business.

Meta-journalists’ [his staff is one-third journalists and two-thirds programmers] job is to look at a situation, and map a constellation of possibilities. If we don’t understand it, we pull in domain experts.

The world of big data is a world that’s dying for good analysis. We will always have journalists and data analysts. What we’re doing is, we’re taking a skill set that we have tremendous respect for and expanding it into a whole new world.

The overall effort is to try to humanize the machine, but not to the point where it’s super-creepy. We will decide at some point that there’s data we have that we won’t use.

Bias at scale is a danger.

The government commitment to transparency falls short because only well-trained data journalists can make something of the data. I see our role as making it for everybody…. Let’s go beyond data transparency to insight transparency. It can’t be done at the data level, it can’t be done at the visualization level, it has to be done at the story level.


The differences between print and online publishing

I’ve spent the past month helping edit a book. A real, old-timey, printed-pages book, with big photos and tons of words. While it has been an all-consuming grind to move the thing from words on a screen to designed layout to perfected page, creating a book also opened my eyes even further to a handful of differences between the print and online worlds of publishing. I suppose I knew these differences abstractly — after all, I’ve worked in the print publishing world for more than a decade and I’ve written about some of these variations before — but living the book-publishing life instead of the online-publishing one for a month solid has put these five distinctions into stark relief.

1. Standardized technology
Practically the entire print world (magazines as well) uses Adobe’s Creative Suite. If you’re a publisher, you’re using InDesign, Photoshop and Illustrator, period. Occasionally there are major disruptions — when the industry moved from QuarkXPress to InDesign around the turn of the century, for example, after having been Quark-centric for the previous half-dozen years. If a stranger wandered in off the street to a prepress shop or printer, they’d see InDesign being used. If a college kid majors in graphic design, she’d better be taught to use Illustrator. If you’re a photographer or retoucher, Photoshop is your go-to.

Compare this to the completely opposite world of online publishing. There’s no standard content management system that every publisher uses. Open-source platforms like WordPress and Drupal are huge and growing — they’re being selected as the go-to CMSes more every day — but they’re not widespread enough to be called a standard, at least not the way InDesign is for print publishers. More often, each Internet publishing site has its own homegrown, cobbled-together Frankenstein half-solution, which works well enough to connect A to B, but just barely — not the complete solution that Adobe Creative Suite has been for print.

There’s also no standard photo-editing app: Photoshop is one option for online photo editing, but so are Pixlr, Aviary, Gimp, on and on. Even Facebook and Twitter — not to mention Instagram — offer online photo editing.

In fact, Internet publishing reminds me of nothing more than print in the 1980s and 1990s. Computers were being introduced and used to some degree for word processing, but there was no single software system for print publishing. We’d moved well beyond copy boys, news alerts coming across actual wires and traditional typesetting, but the “technology” that most publishers used then included paste-ups and X-acto knives (or some version thereof). We’re living the equivalent now online. Will the Internet standardize to a single CMS? Will there be a turnkey solution invented that takes online publishing from primordial to fully evolved?

2. Established process and workflow
The printed word carries with it an established process, one that has worked more or less the same way since Gutenberg. First you write the words, then you edit them, then you publish them. This is still true in print publishing. Broadly: brainstorm, assign, write, edit (line edit, fact-check, copyedit), design, prep, print, and then distribute the completed, unalterable product. There are often many rounds of each of these steps, and distribution can be a months-long process. But a process it is, and one that carries a fixed order and a good degree of finality.

Online publishing, on the other hand, upends this process from end to end; the online workflow is not fixed. Anyone can devise her own ideas and then write them. They needn’t be edited or fact-checked, but even when they are, many people and even organizations publish first and edit later, and then republish. This doesn’t actually disrupt the distribution process a bit, because the piece is a living document that can always be changed. The immediate distribution means that readers can also respond immediately, and they do, via comments and social media, and this often precipitates yet another round of reediting and republishing.

Compare the reactions of print versus online outlets to the publishing scandal of the summer: Jonah Lehrer’s fabrication of quotes and self-plagiarism. His book publisher, Houghton, had to “halt shipment of physical copies of the book and [take] the e-book off the market,” as well as offer refunds to readers who purchased copies of the book. Presumably, it will actually fact-check the book at some point, then issue a corrected version in a new print run sometime before…who knows when.

Lehrer’s online publishers, on the other hand, merely republished his pieces with an appended “Editor’s Note” saying they “regret the duplication of material” (NewYorker.com) or a “notice indicating some work by this author has been found to fall outside our editorial standards” (Wired.com).

I haven’t discussed the cost-as-expectation factor because I want to limit this post to my observations about technology and workflow as an industry insider, but I do wonder whether, because the Internet is free, the standards are lower for both process and product. Regardless, it’s clear that making corrections as you go along isn’t possible with a printed product once it’s been distributed.

I also think that because the Internet is not only a publishing business but also a technology business in a way that print is not, editors are cribbing from technologists’ embrace of iterative methodologies and workflows, such as Agile (as opposed to Waterfall) — more on this below.

3. Clearly defined roles and responsibilities
Hand in hand with the process itself are the people who conduct the process. Print, having been around for centuries, has evolved to the point where jobs are delineated. It can be stated generally that in the world of print, photographers shoot pictures and photo editors select among these pictures. Designers marry text and art. Copy editors edit copy. Printers print. Managing editors meet deadlines, collaborating with all parties to get things where they need to be when they need to be there. There’s no such delineation in the online publishing world. Editors in chief shoot photos and video; copy editors crop art; writers publish. Everyone does a little bit of everything: It’s slapdash, it’s uncivilized, it’s unevolved.

I think that soon this madness will organize itself into more clearly defined roles, or else we’ll all burn out, go crazy and move to yurts in the middle of Idaho. This is happening already in small degrees in online newsrooms, and it’s starting to reach into online publishing broadly, but I have to believe that the insanity will decrease and the explicit definition of roles will advance as we sort out how it all fits together.

4. Focused, respectful meetings
It caught me off guard to realize that something as simple as speaking to coworkers is very different in the print versus online worlds, but the meetings I had when I was working on the book were a far cry from those I’ve had when I was working online. They were focused, with little posturing, corporate speak, agenda pushing or bureaucracy. At no point did anyone say, “Let’s take that offline” (translation: “Shut up”). At no point did I wonder, “Are you answering email or IMing the person across the table right now instead of paying attention to what I’m saying?” It’s pretty simple: No (or few) laptops and lots of respect for others and their abilities.

Technology likes to put labels on concepts that publishing has been using for decades. For example, Agile has concepts like “stand-ups” and “Scrum.” Print has been having these sorts of as-needed check-ins for as long as it’s been around — it’s called “talking to your coworkers,” and it works quite well as a method of communication and dissemination of information. For all that’s going against it, print succeeds on a human level; technologists are playing catch-up in this respect. Whether this is because most technologists are men or most technologists are introverts I’m not sure, but the cultural and human-interaction differences are clear. If online publishing did a little more in the way of focused and respectful meetings — or maybe even fewer organized meetings and more on-the-fly collaboration — I think the industry would reap major benefits.

5. Frequency of disruption by and importance placed on email and social media
When I was heads-down editing on paper for this book, and when I was on the computer editing, devising schedules or creating task lists, I didn’t check email, Facebook, Twitter, or really any other website except during lunch. Turns out, this is fairly easy to do when you’re not working on a website yourself. I’ll admit that I felt a little out of the loop on the latest stupid thing Mitt Romney said. I missed the uproar about, next-day recap of, and explanatory cultural essay regarding Honey Boo-Boo. But I didn’t actually feel less engaged with the world. Completely engaged in the task at hand, I felt that the focused energy I was able to pour into the book benefited both the work and my own sense of accomplishment.

When I work online I often end days thinking, “What did I actually do today? Meetings, emails, checking social media…now the day is over, and what do I have to show for it?” Quite distinctly, when I ended days on the book, I could say with conviction that what I had worked on mattered. I moved whatever I was working on from one state to the next, and I improved it when it was in my hands. It was a welcome departure.


The book will be in stores in a few months. And I’m about to press “publish” on this post, which will then be live and available to anyone with an Internet connection the moment after I do. All of which serves as the starkest reminder yet of the benefits of, drawbacks to and often chasm-like differences between the two media. Unlike print, online publishing’s history is being written as it’s being lived, and I feel privileged to be a witness to it.


Content farming and its runoff

Content farms, or scaled content creators, have generally gotten a bad name in journalism. I know because when I worked for one — AOL Huffington Post’s Seed, before it got shuttered in February — I got a lot of guff from traditional journalists. The line was that we paid writers — sometimes “writers” — a pittance to create crappy content. In truth, that did and does happen, especially at Demand Media (which creates content for eHow and Livestrong, among other sites) and other low-quality, high-search-volume sites and site scrapers.

At Seed, we strove to find a middle ground between Demand’s formula and a slightly higher-quality, slightly more expensive, hopefully higher-ranking and better-referring one. The formula was experimental; I often felt like I worked in a journalism lab, where, just as a scientist might test a theory, we’d hypothesize, try, react, tweak, recast and reattempt, repeatedly, until we had a winning formula.

I always thought of About.com as the proto-content farm, Demand as the next step (forward or backward I was never sure), and Seed as the next evolution.

Coincidentally, today brings news about both of these companies. And the latest scoop reveals that both are in jeopardy, for reasons having to do with search rankings and algorithm changes, quality (reality and perception), user behavior changes, the rise of social media, and the evolution of the Internet at large.

About.com, which is owned by the New York Times Co. (a fact that always lent it an air of ethics the rest of its peers never shared), is being sold to Answers.com. According to Peter Kafka at All Things D, when the Times Co. bought it in 2005, it was for $410 million. It’s selling it today for $270 million.

Demand Media, according to Jeff Bercovici at Forbes, claims a profit for the quarter. Ahem. I guess $94,000 is a profit. For a publicly traded company that had a loss of $2.4 million at this time last year, maybe that counts. But overall, I think we can say definitively at this point that the Internet is trending away from low-quality garbage and toward actually helpful articles — maybe even some that are well written enough that the user may delight in them and desire to share them.

Both companies would certainly benefit from not having to be so reliant on Google’s indiscriminate algorithm changes. Demand has already spiked millions of pieces of crappy content and improved others (presumably those it can win on in search) to curry favor with rankings and users. About.com, I think, due to its nature and structure, may have reached saturation, which isn’t to say that what’s already there isn’t of value — on the contrary. But the Internet is not a meritocracy, and having content that’s good doesn’t automatically mean it’s valuable monetarily.

For both companies, there’s nothing to do but evolve along with the web, take it where the Internet leads, try to keep up with the bruising pace of change, and respond accordingly. In other words: test theories, tweak them and try again, as we did at Seed, and hope you’re patient enough (and/or your pockets are deep enough) that you come out on the other side with head held high and a profit to show for blazing the trail. Whether that can actually happen with content farms (or algorithmic solutions to similar situations) remains to be seen.

UPDATE: About.com was sold to Barry Diller’s IAC, the company that owns Ask.com, for $300 million.


Journatic and the future of local news

This week’s This American Life featured a segment on Journatic, a hyperlocal, scaled-content creator that’s apparently replacing local reporters in many markets. I’ve previously written about hyperlocal news and the value of using algorithms in news creation, so the story was of great interest to me.

My argument with hyperlocal is that no one has yet figured out how to do it right. It sounds to me like Journatic is finding some success, but it’s also failing in important ways. My defense of algorithms is mostly to do with the company Narrative Science, which as I said is “not a threat, it’s a tool, and it fills a need.” That need is basically the scut work of news reporting, and although the folks there are working on this very issue, for now, “It’s a tool that does a programmatic task, but not a contextual one, as well as a human.”

Journatic aims to solve the hyperlocal problem with the algorithmic solution. The company scrapes databases of all kinds, then uses that data to “report” on local bowling scores, trash pickup times, where the cheapest gas is, and who has died recently. The company does this by using algorithms to mine and sort public information, and there’s nothing necessarily wrong with that.

When it launched, Journatic-populated site BlockShopper was basically a real-estate listings site based on publicly available data. Using public records, it would “report,” for example, that “123 Main St. is in foreclosure.” But since then, the algorithms and tools have gotten smarter. Soon it was able to say a home was in foreclosure “by the bank” and also add that it “is up for auction on March 31.” The site is now so smart that it actually feels almost invasive. To wit:

The real estate information contained in the article is publicly available, from the names of the people involved in the transaction to the price paid to the location details. The fascinating thing — and what pushes it into a brave new frontier of journalism and privacy invasion — is that the information on the professions of the people involved is also publicly available (probably via LinkedIn). Arguably, all the article is doing is presenting public data in a new format. The difference is access and availability. In the pre-Internet days, there was no way to know public information except to go to the city records office and look, and there was really no way to know about people’s professions except to know them or ask them. These tasks required interested and motivated parties (such as journalists), because actually going places and talking to people requires on-the-ground reporting (not to mention the subjects’ cooperation). This is not the sort of work Journatic traffics in. That’s not a criticism, necessarily, just a fact: there used to be barriers to the information; now there aren’t, and Journatic uses this lack of barriers, plus its algorithms, to surface the data.


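The mechanics are easy to picture. Here’s a minimal sketch of how a public record might become one of those blurbs — my own invention, assuming nothing about Journatic’s actual pipeline; the field names and data sources are hypothetical:

```python
# A hypothetical sketch of turning a BlockShopper-style public record into a
# sentence, adding clauses only for the data sources we actually have.
# This is an illustration; I have no knowledge of Journatic's real code.

def foreclosure_blurb(record):
    """Build a listing sentence from whatever fields the record contains."""
    parts = [f"{record['address']} is in foreclosure"]
    if record.get("lender"):            # e.g., from county mortgage records
        parts.append(f"by {record['lender']}")
    if record.get("auction_date"):      # e.g., from court filings
        parts.append(f"and is up for auction on {record['auction_date']}")
    return " ".join(parts) + "."

print(foreclosure_blurb({"address": "123 Main St."}))
# -> 123 Main St. is in foreclosure.

print(foreclosure_blurb({"address": "123 Main St.", "lender": "the bank",
                         "auction_date": "March 31"}))
# -> 123 Main St. is in foreclosure by the bank and is up for auction on March 31.
```

The richer the scraped record, the richer the sentence — which is exactly the progression BlockShopper’s listings have followed.
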
At first, the company didn’t do any (or much) writing or analysis. According to This American Life and its whistle-blower, though, the company now pays non-native English speakers in the Philippines between $0.35 and $0.40 a story to try to add a bit of context to the data. Thirty-five to forty cents! However shady this is, it is not necessarily unethical. It’s capitalistic, and it’s pretty shameful, and it feels wrong somehow, but it’s not unethical journalistically.

Where it does get unethical is when readers are misled, and that has apparently occurred. According to the show, Journatic forces these writers in the Philippines to use fake bylines — “Amy Anderson,” “Jimmy Finkel” and any number of names with the last name “Andrews” — in order to Americanize them and dupe readers. This is flat-out wrong, and I think Journatic knew it — the company apparently reversed its stance after the story aired.

But ethics aside, and with journalism in its broader context here, Journatic’s founder, Brian Timpone, claims that the “single reporter model” doesn’t work anymore. The Chicago Tribune, one of Journatic’s customers, says that it’s gotten three times more content for a lot less money. These are serious issues for the future of the profession (along with the opportunity for privacy invasion and privacy mishandling that all this unfiltered data presents). It’s no doubt true that the Trib paid less money for more content than it would have hiring local reporters. But what is the quality of the work? I think we all know the answer. Shouldn’t that be a bigger factor than it is? If you’re just turning out junk, your brand gets diluted, and your readers soon abandon you altogether.

It’s easy to criticize, but it seems to me that Timpone is trying, as we all are, to devise a way forward. That’s admirable, in its way. It’s a little scary, and the desire for progress sometimes makes us color outside the lines, and when that happens, places like This American Life need to be there as regulators, as just happened here. We’re all still muddling our way through the ever-changing new online media landscape, and we will test theories and make mistakes and learn lessons, and with any luck we will end up with a better product, one that serves readers first, last and always. I hope someone is someday able to crack the code of good news done quickly, at good quality, for a good wage. Until then, we must keep trying.


The GM-Facebook showdown

Two posts on the GM-Facebook face-off. Similar thoughts (and similar to those I’ve voiced before), but the lesson is that brands need to up their content game to appeal to users in new ways and meet consumers where they are. The technology (and the marketing philosophy around it) seems to be evolving faster than brands can strategize, but brands must engage users — on the users’ own terms — if they want to succeed.

“When brands focus more of their resources on creating compelling digital content—things that people care about sharing—they’ll be able to reach the audiences they’re after.” via

“Advertisers need to think about new end-to-end experiences that inspire and engage a far more connected and discerning audience.” via

Ultimately Facebook is a revolution, and that’s bigger than one brand. As I’ve said before, I wouldn’t root against ’em.


The content is the product

Thought-provoking piece from Andy Rutledge:

Online publishing is largely broken because media outlets are built to seek profit not from their product, but rather from the distractions and obstacles they conspire to place between the customer and the product. It’s a strategy that destroys quality, destroys confidence, and destroys the product consumption experience. It’s irrationality on parade: publications set up to destroy the very things they are supposed to deliver. It should come as no surprise that such a product tends to sell poorly.

Digital publishers don’t need a cleverer and more elaborate ad strategy. Digital publishers need a value and UX strategy for their product.


A simple solution

Forbes’ Lewis DVorkin attempts to drop some wisdom with his nine “requirements for a sustainable model for journalism.” His opening words are powerful:

“FORBES and the entire media industry face daunting challenges. Digital publishing is perhaps the most disruptive force the media has ever encountered. Anyone can publish anywhere, anytime and attract an audience. Questions loom about the future of print in a tablet world. As downward pressure on CPMs indicate, new kinds of digital ad products are required. Journalists must learn entirely new skills or risk being run over by a competitive force of native digital content creators. News organizations need to develop new labor models (our contributor network is one) that can produce quality content efficiently. Most scary of all, news stalwarts must recognize that brands are publishers, too, and they want the media to provide new solutions for them to reach their customers.”

Yessir!

But then his nine simple tips come into play. We need to create quality content. I agree! Journalists need to engage with their readers. Yes, I think that’s smart. The things we write and products we create need to be usable and efficient and at scale and…wait, what? All of that, all at the same time? Hardly.

Who would disagree that a journalistic business (or any other, for that matter) should strive for a quality product from an authentic source who efficiently creates content via usable platforms and is also, simultaneously, profitable? What advertiser would not like to create “premium products that enhance, rather than disrupt, emerging consumer experiences” to win audiences and sell their stuff? No one, that’s who!

But the reality is that it’s really, really, really hard to actually do all those things. Really.

I don’t know if it is, as one commenter says, “the ‘do more with less’ pixie dust mantra that executives who don’t have a specific answer like to use,” because I want to be more positive than that. But DVorkin’s statement that “Scalable content-creation networks and open-source publishing tools that have been highly customized can drive the timely output of quality content” makes me go

[Image: Spock raises a skeptical eyebrow — http://27.media.tumblr.com/tumblr_lkl5gl1EYl1qil7l3o1_400.gif]

As I mentioned in a previous post, the ground is always shifting, and none of us has the answers. We theorize, test and iterate. With any luck, people earn a living wage to experiment with how to create content that others find compelling, and to somehow monetize it. But expecting this business to be all these things — efficient, engaged, supremely usable, scalable, transparent, authentic and profitable — all at the same time, when the reality reshuffles itself every three to six months and all of us are merely guessing at the industry’s next steps, is a very high bar indeed.

DVorkin has at least cobbled together some theories. It’s a start. He is, like we all are, trying, throwing stuff at walls and seeing if it sticks, building the plane in midair. Maybe he thinks quality and quantity can live harmoniously together — my experience has not borne that out. Perhaps he really does believe that efficient can also be engaged — I’ve not seen that happen without either burnout or, at minimum, tears.

But at least he’s out there doing it: Theorizing, testing, iterating.


Keeping up with users

Two very interesting pieces, made more interesting when juxtaposed. One is a fascinating look back at Technology Review’s app-creation process and attendant drama. The other is about how those annoying Social Reader apps, after a moment in the sun, are being shunned by users.

The thesis of both seems to be that brands are stumbling in the dark to understand user/reader behavior. And just when they think they’ve found the light, after spending hundreds of thousands — if not millions — of dollars, users look, shrug and move on.

From Jason Pontin, the EIC and publisher of Technology Review:

Absurdly, many publishers ended up producing six different versions of their editorial product: a print publication, a conventional digital replica for Web browsers and proprietary software, a digital replica for landscape viewing on tablets, something that was not quite a digital replica for portrait viewing on tablets, a kind of hack for smart phones, and ordinary HTML pages for their websites. Software development of apps was much harder than publishers had anticipated, because they had hired Web developers who knew technologies like HTML, CSS, and JavaScript. Publishers were astonished to learn that iPad apps were real, if small, applications, mostly written in a language called Objective C, which no one in their WebDev departments knew. Publishers reacted by outsourcing app development, which was expensive, time-consuming, and unbudgeted.

The ground of the Internet is constantly shifting, and brands and businesses have to keep up. It’s very expensive, frustrating and often fruitless to try, but keep up one must.

No one really knows the answers. No one really knows why some apps are successful and others aren’t. Or why communities spring up or fall away. Why sites run hot then cold. Engagement, sure. Great user experience, yes. Brand loyalty. Easy tools. Peer motivation. Curiosity. The urge to be heard. Bragging rights. Belonging. Good deals. FOMO, especially with social.

Like magazines before them, sites and apps, and programming languages, and CMSes, and devices (and on and on) heat up, run hot…but then — poof! Gone. Or at least diminished.

Truly, no one knows. Many people have theories, but that’s all they are, because this technology stuff is brand-new. But it’s important to note that it’s not a waste of time to theorize, build upon that theory (aka experiment), test it and learn from it. As a matter of fact, that’s all we can do: Learn, adapt and with any luck succeed.


Digital journalism quote roundup

From Madrid, the Paley Center’s international council of media executives edition…

Google’s head of news products and Google+ programming, Richard Gingras, on using data for good:

“This is a renaissance of media and journalism…computational journalism can amount to the reinvention of the reporter’s notebook.”

Facebook’s journalism manager, Vadim Lavrusik, on the value of context in content:

“People want analysis from journalists. [FB] posts with journalists’ analysis receive 20 percent more referral clicks.”

“Media companies have approached it from ‘we need to chase more eyeballs, we need to create more content.’ So journalists who created a few articles in one week are now doing that in one day. But content isn’t scarce — it’s the contextualisation and making sense of that content that’s becoming scarce.”

FT.com Managing Director Rob Grimshaw on social media distribution:

“We have to engage with social media [but] not all distribution is good distribution.”

WSJ Europe deputy editor Neil McIntosh on editorial curation:

“Our readers need us to sift. Readers are often crying out for less, not more. They’re still looking for the nut graf and the sort of stories I was taught to bash out 20 years ago.”
