Computer scientists envision a smarter Web

November 12, 2006|By New York Times News Service

SAN FRANCISCO -- From the billions of documents that form the World Wide Web and the links that weave them together, computer scientists and a growing collection of startup companies are finding new ways to mine human intelligence.

Their goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide - and even provide the foundation for systems that can reason in a human fashion. That level of artificial intelligence, with machines doing the thinking instead of simply following commands, has eluded researchers for more than half a century.

Referred to as Web 3.0, the effort is in its infancy, and the idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like IBM and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.

But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.

The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.

"I call it the World Wide Database," said Nova Spivack, the founder of a startup firm whose technology detects relationships between nuggets of information mining the World Wide Web. "We are going from a Web of connected documents to a Web of connected data."

Web 2.0, which describes the ability to seamlessly connect applications (like geographical mapping) and services (like photo sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 - or the "semantic Web," for the idea of adding meaning - is only now emerging.

The classic example of the Web 2.0 era is the "mash-up" - for example, connecting a rental-housing Web site with Google Maps to create a new, more useful service that automatically shows the location of each rental listing.
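As a rough illustration of what such a mash-up involves, the Python sketch below joins a hypothetical list of rental ads with a geocoding step and produces the latitude-longitude markers a mapping service like Google Maps would plot. In a real mash-up, the geocode_address function would call the mapping provider's geocoding service rather than a local lookup table; both the listings and the coordinates here are made up for illustration.

```python
# Hypothetical mash-up sketch: join rental listings with geographic
# coordinates so a mapping service can plot each one.

# Sample rental listings (made up for illustration).
listings = [
    {"title": "2BR apartment", "rent": 1400, "address": "100 Main St, Baltimore, MD"},
    {"title": "Studio loft",   "rent": 950,  "address": "25 Light St, Baltimore, MD"},
]

# Stand-in for a call to a mapping provider's geocoding API.
FAKE_GEOCODER = {
    "100 Main St, Baltimore, MD": (39.2904, -76.6122),
    "25 Light St, Baltimore, MD": (39.2866, -76.6135),
}

def geocode_address(address):
    """In a real mash-up this would call a maps geocoding service."""
    return FAKE_GEOCODER.get(address)

def build_map_markers(rental_listings):
    """Attach coordinates to each listing so it can be shown on a map."""
    markers = []
    for listing in rental_listings:
        coords = geocode_address(listing["address"])
        if coords:
            markers.append({**listing, "lat": coords[0], "lng": coords[1]})
    return markers

for marker in build_map_markers(listings):
    print(f'{marker["title"]} (${marker["rent"]}/mo) at {marker["lat"]}, {marker["lng"]}')
```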

In contrast, the Holy Grail for developers of the semantic Web is to build a system that can give a reasonable and complete response to a simple question like: "I'm looking for a warm place to vacation, and I have a budget of $3,000."

Under today's system, such a query can lead to hours of sifting - through lists of flights, hotels and car rentals - and the options are often at odds with one another. Under Web 3.0, the same search would ideally call up a complete vacation package that was planned as meticulously as if it had been assembled by a human travel agent.
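A very rough sketch of the kind of combining such a system would have to do is shown below. It works over hypothetical destination, flight and hotel records and assembles only the packages that satisfy both the "warm" constraint and the $3,000 budget; a real semantic-Web system would draw these facts from many sites rather than a local table, but the filtering-and-combining step is the same in spirit.

```python
# Hypothetical sketch of answering "a warm place to vacation on a $3,000
# budget" from structured data instead of lists of keyword-search results.

destinations = [
    {"name": "Cancun",    "climate": "warm"},
    {"name": "Reykjavik", "climate": "cold"},
]
flights = [
    {"to": "Cancun",    "price": 600},
    {"to": "Reykjavik", "price": 450},
]
hotels = [
    {"city": "Cancun",    "name": "Playa Inn",   "price_per_night": 150},
    {"city": "Reykjavik", "name": "Harbor Stay", "price_per_night": 130},
]

def plan_vacation(budget, nights=7, wanted_climate="warm"):
    """Combine flight and hotel facts into packages that fit the constraints."""
    packages = []
    for dest in destinations:
        if dest["climate"] != wanted_climate:
            continue
        for flight in flights:
            if flight["to"] != dest["name"]:
                continue
            for hotel in hotels:
                if hotel["city"] != dest["name"]:
                    continue
                total = flight["price"] + nights * hotel["price_per_night"]
                if total <= budget:
                    packages.append({"destination": dest["name"],
                                     "hotel": hotel["name"],
                                     "total_cost": total})
    return packages

print(plan_vacation(budget=3000))
```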

How such systems will be built, and how soon they will begin providing meaningful answers, is a matter of vigorous debate among academic researchers and commercial technologists. Some are focused on creating a vast new structure to supplant the existing Web; others are developing pragmatic tools that extract meaning from the existing Web.

But all agree that if such systems emerge, they will instantly become more commercially valuable than today's search engines, which return thousands or even millions of documents but as a rule do not answer questions directly.

Researchers are pushing further. Spivack's company, Radar Networks, for example, is one of several working to exploit the content of social computing sites, which enable users to collaborate in gathering and adding their thoughts to an array of content, such as travel and movies.
