Redeye VC

Josh Kopelman

Managing Director of First Round Capital.

Despite being coastally challenged (currently living in Philadelphia), Josh has been an active entrepreneur and investor in the Internet industry since its commercialization. In 1992, while he was a student at the Wharton School of the University of Pennsylvania, Josh co-founded Infonautics Corporation – an Internet information company. In 1996, Infonautics went public on the NASDAQ stock exchange.


Feed Frenzy

Back in the early '90s, I co-founded a company called Infonautics that ran an online service called Homework Helper.  It operated on Prodigy and AOL -- a few years before the development of the web browser.  We ran our own data center (the Rackspaces of the world didn't exist) and staffed our network operations team 24x7.

I remember being amazed by the network operator's job.  Given the complexity of the system, the network operator would receive dozens of emails per hour, informing him of the status of the various systems and components.  Emails with subjects like "Server load at 87%" and "Query queue at 43" or "Warning: Disk space on Server43 at 95%".  Most of those emails didn't require action, but the network operator had to review them all in order to find the important ones.  In a typical day, I'd guess the netops desk received 2,000 - 4,000 email updates.

By the time I left Infonautics in 1998, the system had evolved.  Instead of having the systems report to a human via email, we adopted an SNMP dashboard.  This was a piece of software that automatically received (and sometimes acted on) data from the different systems (such as "free memory", "system name", "number of running processes", "default route").  And this level of reporting (and the ability to act on it) eliminated the need for a night-shift network operator.
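To make the contrast concrete, here's a rough sketch of what that kind of dashboard does: poll a handful of metrics, apply threshold rules, and only surface the readings that need attention.  The metric names, thresholds, and the stand-in poll_metrics() function are hypothetical -- a real deployment would pull these values over SNMP rather than from a hard-coded dictionary.

```python
# Rough sketch of the dashboard idea: poll a few metrics, apply
# threshold rules, and only surface the readings that need attention.
# The metric names, thresholds, and poll_metrics() stand-in are
# hypothetical; a real system would pull these values over SNMP.

THRESHOLDS = {
    "server_load_pct": 90,    # "Server load at 87%" -> below threshold, ignore
    "disk_usage_pct": 90,     # "Disk space at 95%"  -> above threshold, alert
    "query_queue_len": 100,
}

def poll_metrics():
    # Stand-in for an SNMP GET sweep across the servers.
    return {
        "server43": {"server_load_pct": 87, "disk_usage_pct": 95, "query_queue_len": 43},
        "server44": {"server_load_pct": 41, "disk_usage_pct": 62, "query_queue_len": 7},
    }

def alerts(readings):
    for host, metrics in readings.items():
        for name, value in metrics.items():
            if value >= THRESHOLDS[name]:
                yield f"{host}: {name}={value} exceeds threshold {THRESHOLDS[name]}"

for alert in alerts(poll_metrics()):
    print(alert)  # only the handful of readings that need a human (or a script)
```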

Fast forward a decade to 2008.

Over the last six months, it seems like every web site is adopting the notion of a "News Feed".  These feeds keep me informed about the status/actions of all my friends and relationships.  I have a Facebook News Feed.  I have a Twitter Feed.  I have a LinkedIn Feed.  And even more recently, a new category of products called Feed Aggregators has arrived.  These aggregators, such as FriendFeed and SocialThing, allow you to track your feeds across multiple sites.  There has even been a spoof site that aggregates the aggregators.

I love the concept of the News Feed.  I think it is an early implementation of the Implicit Web, helping to break down the data silos.  However, I'm now receiving hundreds of feed updates a day.  And with the combination of (1) more users activating feeds and (2) more web sites offering them, I think that feed volume is poised to increase exponentially.  And I can sense that, just like at Infonautics in 1994, the volume will increase to a level that will require 24-hour vigilance to remain informed.

So, the question I've been thinking a lot about lately is:  What happens next?  How does the feed concept scale -- without forcing people to hire their own netops team to watch the feed?  And I've come to two rough conclusions:

1.  Feeds 2.0 = the feed dashboard

Just like SNMP allowed us to build an automated dashboard to monitor the status of different connected devices, I think it's logical to assume that web services will develop to allow us to monitor the status of connected people.  I'm not talking about a chronological data dump of text like the current 1.0 feed aggregators.  I think there will be applications that aggregate, interpret, and act on feeds.  This dashboard will collect the thousands of feed items, determine which require action and which are important, and provide the user with a level of abstraction that doesn't exist today.
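Here's a rough sketch of what "aggregate, interpret, and act" might look like.  The item fields, sources, and scoring rules are all assumptions made for illustration -- not any particular site's API -- but the point is the same as the netops dashboard: surface the handful of items that matter instead of listing everything chronologically.

```python
# Sketch of a "feed dashboard": pull items from several feeds, score
# them, and surface only the ones worth a person's attention.
# The item fields, sources, and scoring rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str   # e.g. "twitter", "facebook", "linkedin"
    actor: str
    verb: str     # e.g. "posted", "mentioned_you", "changed_jobs"
    text: str

def score(item: FeedItem) -> int:
    """Interpret an item instead of just listing it chronologically."""
    points = 0
    if item.verb == "mentioned_you":
        points += 10   # direct mentions usually need action
    if item.verb == "changed_jobs":
        points += 5    # worth a congratulations note
    if item.verb == "posted":
        points += 1    # background noise, batch it up
    return points

def dashboard(items, threshold=5):
    important = [i for i in items if score(i) >= threshold]
    return sorted(important, key=score, reverse=True)

items = [
    FeedItem("twitter", "alice", "mentioned_you", "great post @josh"),
    FeedItem("facebook", "bob", "posted", "photos from the weekend"),
    FeedItem("linkedin", "carol", "changed_jobs", "now at Acme Corp"),
]
for item in dashboard(items):
    print(f"[{item.source}] {item.actor} {item.verb}: {item.text}")
```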

2.  Walmart wants to be your friend

As I've been thinking about the Implicit Web, I've seen a variety of technologies/standards (such as APML, Microformats, OpenID, Data Portability, OpenSocial) that are intended to help web services talk with each other -- and break down the data silos.  However, I think we might get surprised here.

I think the real challenge with respect to the implicit (or semantic) web is not technical.  Rather, it has to do with educating (and empowering) users so they understand the privacy and control issues related to cross-application data sharing.

And as more and more applications export events into News Feeds, I think we might find that the News Feed becomes a standard for cross-application information delivery.  Rather than trying to build semantic intent into a web site or web service, could we be better off collecting it from the news feeds?  The data is already structured in a fairly standard format.  And most importantly, user permissioning and privacy controls are already built into news feeds.  Users understand the issue/decision involved in "Bob wants to be your friend".  Is it such a big leap for people to receive "Walmart wants to be your friend" or "Amazon wants to be your friend"?
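One way to picture it: a feed entry already says who did what to what, and the user's approve/deny decision is the only new permission model needed.  The entry format and subscriber names below are illustrative assumptions, loosely in the actor/verb/object style that feeds already use, not any existing site's schema.

```python
# Sketch of the "Walmart wants to be your friend" idea: feed entries are
# already fairly structured (who did what to what), and the user decides
# which subscribers -- people or companies -- get to read them.
# The entry format and subscriber names are illustrative assumptions.

feed_entry = {
    "actor": "josh",
    "verb": "purchased",
    "object": "running shoes",
    "site": "example-store.com",
}

# The permission model users already understand: approve or deny a
# subscription request, whether it comes from a person or a retailer.
approved_subscribers = {"bob", "carol"}
pending_requests = ["Walmart", "Amazon"]

def deliver(entry, subscribers):
    for subscriber in subscribers:
        # Each approved subscriber gets the same structured entry,
        # so a web site can act on it just as easily as a friend can.
        print(f"to {subscriber}: {entry['actor']} {entry['verb']} "
              f"{entry['object']} on {entry['site']}")

deliver(feed_entry, approved_subscribers)
```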

If web sites -- rather than people -- were subscribers to my news feed, that would break down a whole bunch of silos.  Amazon already knows how to take advantage of its onsite user activity to enrich a customer's experience.  But if it can figure out how to utilize a user's offsite activity -- wow!

And privacy remains in the consumer's hands -- just like today, I control (1) what goes into my feed and (2) who receives it.  If this vision comes to fruition, I think the big opportunity is not for the company (or companies) that collects and distributes the feeds, but rather for the company (or companies) that can turn the data into actionable, useful information.

I'd love to hear your thoughts on this.  (And if there are any companies that are working to create the "SNMP for feeds", please contact me!)
