Redeye VC

Josh Kopelman

Managing Director of First Round Capital.

Despite being coastally challenged (currently living in Philadelphia), Josh has been an active entrepreneur and investor in the Internet industry since its commercialization. In 1992, while he was a student at the Wharton School of the University of Pennsylvania, Josh co-founded Infonautics Corporation – an Internet information company. In 1996, Infonautics went public on the NASDAQ stock exchange.


Feed Frenzy

Back in the early '90s, I co-founded a company called Infonautics that ran an online service called Homework Helper.  It operated on Prodigy and AOL -- a few years before the development of the web browser.  We ran our own data center (the Rackspaces of the world didn't exist) and staffed our network operations team 24x7.

I remember being amazed by the network operator's job.  Given the complexity of the system, the network operator would receive dozens of emails per hour, informing him of the status of the various systems and components.  Emails with subjects like "Server load at 87%" and "Query queue at 43" or "Warning: Disk space on Server43 at 95%".  Most of those emails didn't require action, but the network operator had to review them all in order to find the important ones.  In a typical day, I'd guess the netops desk received 2,000 - 4,000 email updates.

By the time I left Infonautics in 1998, the system had evolved.  Instead of the systems reporting to a human via email, we adopted an SNMP dashboard.  This was a piece of software that automatically received (and sometimes acted on) data from the different systems (such as "free memory", "system name", "number of running processes", "default route").  And this level of reporting (and the ability to act on it) eliminated the need for a night-shift network operator.
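To make the contrast concrete, here's a minimal sketch of the kind of automation that dashboard provided: poll the metrics, compare each one against a threshold, and surface only the readings that need action. The metric names and thresholds below are hypothetical, not Infonautics' actual configuration.

```python
# Hypothetical thresholds, in the spirit of "Server load at 87%" alerts.
THRESHOLDS = {
    "server_load_pct": 85,
    "disk_usage_pct": 90,
    "query_queue_len": 40,
}

def check_metrics(readings):
    """Return only the readings that breach a threshold and need action."""
    alerts = []
    for name, value in readings.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value >= limit:
            alerts.append((name, value, limit))
    return alerts

readings = {"server_load_pct": 87, "disk_usage_pct": 60, "query_queue_len": 43}
alerts = check_metrics(readings)
# Two of the three readings breach their thresholds; the rest are dropped --
# the triage the night-shift operator used to do by reading every email.
```

The point is not the code but the shift it represents: the human reviews exceptions, not the entire stream.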

Fast forward a decade to 2008.

Over the last six months, it seems like every web site is adopting the notion of a "News Feed".  These feeds keep me informed about the status/actions of all my friends and relationships.  I have a Facebook News Feed.  I have a Twitter Feed.  I have a LinkedIn Feed.  And even more recently, a new category of products called Feed Aggregators have arrived.  These aggregators, such as FriendFeed and SocialThing, allow you to track your feeds across multiple sites.  There has even been a spoof site that aggregates the aggregators.

I love the concept of the News Feed.  I think it is an early implementation of the Implicit Web, helping to break down the data silos.  However, I'm now receiving hundreds of feed updates a day.  And with the combination of (1) more users activating feeds and (2) more web sites offering them, I think that feed volume is poised to increase exponentially.  And I can sense that, just like at Infonautics in 1994, the volume will increase to a level that will require 24 hour vigilance to remain informed.

So, the question I've been thinking a lot about lately is:  What happens next?  How does the feed concept scale -- without forcing people to hire their own netops team to watch the feed?  And I've come to two rough conclusions:

1.  Feeds 2.0 = the feed dashboard

Just like SNMP allowed us to build an automated dashboard to monitor the status of different connected devices, I think it's logical to assume that web services will develop to allow us to monitor the status of connected people.  I'm not talking about a chronological data dump of text like the current 1.0 feed aggregators.  I think there will be applications which aggregate, interpret, and act on feeds.  This dashboard will collect the thousands of feed items, determine which require action and which are important, and provide the user with a level of abstraction that does not exist today.

2.  Walmart wants to be your friend

As I've been thinking about the Implicit Web, I've seen a variety of technologies/standards (such as APML, Microformats, OpenID, Data Portability, OpenSocial) that are intended to help web services talk with each other -- and break down the data silos.  However, I think we might get surprised here.

I think the real challenge with respect to the implicit (or semantic) web is not technical.  Rather, it has to do with educating (and empowering) the user so they understand the privacy and control issues related to cross-application data sharing.

And as more and more applications export events into News Feeds, I think we might find that the News Feed becomes a standard for cross-application information delivery.  Rather than trying to build semantic intent into a website or webservice, could we be better off collecting it from the news feeds?  Data is structured in a fairly standard format.  And most importantly, user permissioning and privacy controls are already built into newsfeeds.  Users understand the issue/decision involved in "Bob wants to be your friend".  Is it such a big leap for people to receive "Walmart wants to be your friend" or "Amazon wants to be your friend"?

If web sites -- rather than people -- were subscribers of my news feed, that would break down a whole bunch of silos. 
Amazon already knows how to take advantage of its onsite user activity to enrich a customer's experience.  But if it can figure out how to utilize a user's offsite activity -- wow!

And privacy is still in the consumer's hands -- just like how today I control (1) what goes into my feed and (2) who receives it.  If this vision comes to fruition, I think the big opportunity is not for the company (or companies) that collects and distributes the feeds.  Rather, the big opportunity is for the company (or companies) that can turn the data into actionable, useful information.
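The permissioning model described above can be sketched very simply: feed events carry a small, standard structure, and a site only receives them if the user has explicitly approved it as a subscriber -- the same gesture as accepting a friend request. All field names and subscriber names here are made up for illustration.

```python
# A hypothetical structured feed event (actor / verb / object style).
feed_event = {
    "actor": "josh",
    "verb": "purchased",
    "object": "running shoes",
    "source": "example-store.com",
}

# Subscriptions mirror "X wants to be your friend": the user approves each
# subscriber explicitly, so privacy control stays with the user.
approved_subscribers = {"alice", "example-retailer.com"}

def deliver(event, subscriber):
    """Deliver the event only if the user has approved this subscriber."""
    return event if subscriber in approved_subscribers else None

# An approved retailer sees the event; an unapproved site gets nothing.
seen = deliver(feed_event, "example-retailer.com")
blocked = deliver(feed_event, "spammy.example")
```

The design choice worth noting is that the control point is the subscription list, not the data format -- which is why the "Walmart wants to be your friend" prompt is all the user education that's needed.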

I'd love to hear your thoughts on this.  (And if there are any companies that are working to create the "SNMP for feeds", please contact me!)


Alex Iskold

Hi Josh,

This is interesting and good food for thought.

I have a few comments:

1) Alerts should be done by FriendFeed; we do not need another service for that. The same goes for filters. My assumption is that this is part of their model, because otherwise there is not enough going on to build a bigger business.

2) My issue with feeds is that they feel static. It's an odd feeling, because I am trying to think why I like APIs more than feeds, but I do. And yet coming back full circle I think you are right -- feeds are here and they are simple, so this is the way it will play out.

3) I think your idea about big co's eating feeds is huge. This would be a perfect thing to do for Netflix, Amazon, etc., as it would only enrich their own information about the consumer.

The problems that I see with the idea are:

a) Standardization of data (feeds are not really canonical)

b) Potentially NxM adapters - that is, each big co needs to consume each big co's format.

c) Not seeing the room for startups to innovate in this model.

None of this is a big deal, so I think it will work.

4) Privacy issue - I agree. I think much education is to be done, but likely, early adopters will go first and then the rest will follow as long as there are tangible benefits.

This brings us full-circle. Why would consumers want to give their data before seeing tangible benefits? What are going to be the tangible benefits of implicit web?

Josh Fraser


Interesting post. I definitely agree this is the direction things are going. To me, the biggest challenge is how we can capture intent from all that data.

I think it's obvious that companies could deliver better personalized ads and product recommendations if they were able to understand me better from my various data feeds. The challenge would be knowing when and where to deliver that ad.

Is that personalized ad or product recommendation just another interruption to my life? Or could we actually capture intent and deliver it at just the right moment?

Being able to make product recommendations is nice. Getting it to me at exactly the right moment is the difference between Google's revenue and Facebook's.


Here's a blog post covering the other side of this coin:

In a quantum universe, this problem solves itself. In a Newtonian universe, it's recursive.

Knowing this, the problem isn't: "How can we solve this problem for and within the existing business ecology?", it's: "What changes do we need to make to the way the business ecology works so that this kind of thing is no longer a problem?"



I hope Venture Hacks is adding to your feed problem. =)

I love SNMP for feeds. Why limit it to feeds? Why not email too? Feeds are more structured but email is more biggerer(er).

I would guess SNMP for feeds is part of FriendFeed's vision.

Ilya Grigorik

Hmm, so it sounds like a 'Bloomberg dashboard', albeit for social media.

I guess you could make an argument that Netvibes / Pageflakes are the best approximations of that today, but there is plenty of room for improvement and innovation.

Skip Shuda

Josh - While we wait for some genius to build this, we've begun experimenting with a manual approach to this problem. Using virtual teams of interns to scour social media sites, they extract, organize and present relevant posts to executives so they can pop up a single document, respond to relevant posts - and the interns take care of posting back to the social media sites.

Simple but effective for time-pressed execs.


Hi Josh, I think I'm just about ready to make the pitch :) I'm working on "SNMP for feeds" and we finally have the SemWeb technicals in order to make it work... Maybe see you soon.


Q dub

Aggregating data is the easy and obvious step, but what we really need is a way to organize it--and by organize I don't mean categorize, but prioritize. It's not quite as radical as "a new abstraction", but I suspect this could be done easily by existing aggregators and provide tremendous utility.

Donald Foss

To me the biggest challenge to this is not technical or even the interface, but the monetization. I'm not sure what FriendFeed's plan is yet. This prioritization is relatively easy once you have the ontology worked out, and the SNMP for blogs has been around since 2001. The ruleset and prioritization will be costly from the computational point of view.

So who pays for it? Users (probably) won't, and ads will clutter things, especially if compared to FriendFeed and some others. A large number of the target audience--those who could configure and benefit from the filters--block ads.

Josh and Brad, thoughts?

