The concept of information overload is hardly new, but in the age of social media we’re inundated with so much information on a daily basis that it’s hard to keep up. I know I have trouble getting through my 233 feeds; I’ve done my best to organize them into logical folders to help me prioritize, since some days I can’t read everything.
Like many, then, I rely on aggregators to find information that’s valuable to me, especially when I don’t have time to do my own hunting for big news, or simply for serendipity. But which works better? Man or machine? TechMeme or Scoble?
I visit sites like TechMeme, Google News, Topix, and TailRank to find stories of interest in traditional and new media. In so doing, I see what’s being talked about and make sure I don’t miss hot topics of conversation. But while the machines — or more accurately the algorithms created by the likes of Gabe Rivera, Rich Skrenta, and Kevin Burton — do an excellent job for this purpose, they still have the quirks associated with anything computer-generated. Each has its strengths and weaknesses, as well as a unique focus, but they all suffer from a similar flaw: computers can only follow a fixed set of rules, and sometimes the best information doesn’t fall into a neat little box.
In an effort to balance man vs. machine, sites like Digg, Reddit, del.icio.us/popular, and others accept user submissions of stories, compile “votes” for the various pieces of content, and then use some sort of algorithm to generate lists of popular items. Personally, I find these sites less valuable than the automatic aggregators because the chosen content tends to be all over the map, with no clear theme or pattern to what becomes popular. In addition, what becomes popular often does so more for amusement than practical value. My interest in these sites is more for business than entertainment, so they don’t serve my needs quite as well as they may serve others’.
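The actual ranking formulas these sites use are proprietary, but the general shape — net votes combined with a recency bonus so fresh submissions can overtake older ones — can be sketched in a few lines. This is a toy illustration under my own assumptions, not any site’s real algorithm; the function name, decay constant, and sample stories are all hypothetical.

```python
import math
from datetime import datetime, timezone

def hot_score(upvotes: int, downvotes: int, submitted: datetime) -> float:
    """Toy 'hotness' score (hypothetical): log-scaled net votes plus a
    recency bonus, so newer items outrank older ones with similar votes."""
    net = upvotes - downvotes
    order = math.log10(max(abs(net), 1))  # diminishing returns on vote piles
    sign = 1 if net > 0 else -1 if net < 0 else 0
    # Seconds since an arbitrary epoch; newer submissions earn a larger bonus.
    age = (submitted - datetime(2007, 1, 1, tzinfo=timezone.utc)).total_seconds()
    return sign * order + age / 45000  # 45000 s (~12.5 h) per extra point

# Rank two hypothetical submissions: a heavily-voted old story vs. a fresh one.
stories = [
    ("old but huge", 5000, 100, datetime(2007, 6, 1, tzinfo=timezone.utc)),
    ("fresh and rising", 50, 5, datetime(2007, 6, 8, tzinfo=timezone.utc)),
]
ranked = sorted(stories, key=lambda s: hot_score(s[1], s[2], s[3]), reverse=True)
```

With these numbers the week-newer story outranks the older one despite far fewer votes, which is exactly the behavior that makes such front pages feel scattered: recency and amusement value can trump substance.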
Despite my love of technology and my affection for automated solutions to information problems, I continue to find the greatest value in aggregation performed by humans. The editorial value that an individual can provide in directing me to timely and accurate information still exceeds that which a computer algorithm can offer.
I began my career almost two decades ago in Washington, DC and there we relied on a daily fax newsletter that summarized key political news in newspapers across the country. In those pre-Web days, National Journal’s Hotline was a “must read” and the only way to digest the latest on campaigns and policy debates around the country. Even today when we can easily access newspapers from across the nation online, the Hotline continues to play an important role in distilling this flood of information into a digestible form.
Of late, I have also come to rely on Robert Scoble’s shared links to help find interesting items that I might otherwise overlook. He says his goal is to help save time for others. Like the automated web sites, his offering isn’t perfect (I don’t share his affinity for cat pictures, for instance), but his interests parallel mine pretty closely and I find a lot of value in it.
For years, Steve Rubel has offered his del.icio.us links as part of his daily blog feed, and those are also a great resource, though much narrower in focus and volume than what Scoble shares. Jason Calacanis just announced he’s getting into this game as well and is in fact trying to create a broader network of like-minded people that will share a combined feed on del.icio.us.
Yet these human solutions all share a major weakness: the human element! People get busy, have biases, and fall into their own patterns.
So What’s the Answer?
For now, there’s no magic solution to the challenge of information overload. Machines are more timely and consistent, but humans offer greater judgment and more serendipity.
Like many, I will continue to use all of the methods above, plus continue to read feeds on my own. The short-lived SearchFox application once tried to bridge this gap by analyzing my feed-reading behavior to organize information by priority. It was a good start, and I was sorry to see it disappear.
In researching this post, I came across an excellent item Huw Leslie wrote a few weeks ago about a product called Particls, which is currently in invitation-only beta. From the way Huw describes it, it sounds like it has some of the elements I liked about SearchFox, so I definitely want to try it out (anyone have an invite?).
Andy Beard covers some interesting approaches to the problem in a section titled “Custom Meme Trackers” within a longer post on productivity. Stowe Boyd touched on the subject not long ago; I found his points on the importance of networks particularly relevant. Richard MacManus (http://www.readwriteweb.com/archives/enterprise_rss_3_vendors.php) discusses a recent Forrester report on Enterprise RSS that addresses information overload and how some vendors are trying to solve the problem.
I expect this fall’s Defrag conference, with its focus on Trust/Attention/Relevance, will touch on some of the issues, challenges, and solutions in this arena, but ultimately there’s no single answer out there.
For the entrepreneur who finds a good way to address this problem and smartly combines the best of man and machine in an aggregator, however, fame and fortune await.
UPDATE: Eric Rice has a great post that talks about some of these issues, as well as others: “Lately, I’ve been paying attention to an onslaught of new applications and how they fit into my normal flow of must-read-every-bit-of-information-that-exists-EVER…”