(Repurposed from the blog post at http://www.opencalais.com/node/9501)
The Gist: Release 4 of Calais will be a big deal. In that release we’ll go beyond the ability to extract semantic data from your content. We will link that extracted semantic data to datasets from dozens of other information sources, from Wikipedia to Freebase to the CIA World Fact Book. In short – instead of being limited to the contents of the document you’re processing, you’ll be able to develop solutions that leverage a large and rapidly growing information asset: the Linked Data Cloud.
The goal of this post is just to give our community a heads-up to start thinking and planning.
During the course of 2008 we’ve had three significant releases of Calais, with point releases nearly every month along the way. We’ve added new knowledge domains, improved performance, delivered integration with a range of tools and developed new user-facing applications. It’s been a year of amazing growth in our developer community and in the capabilities of the Calais service.
While every previous release has accomplished something significant, Release 4 is going to introduce something that we think is game changing – and that’s life in the Linked Data cloud. It’s important enough that we want to give all the members of our community time to think about it, prepare for it and get your brains in gear on how you might use it.
Every release of Calais up to this point has focused on meeting the need to extract semantic information from text. Release 4 builds on this by creating the ability to harvest the Linked Data cloud using that semantic data.
For this all to make sense we need to introduce a few things. If you already know about de-referenceable URIs and the Linked Data cloud – skim ahead. If not – please take a moment to ingest the background you need.
When you send text to Calais it returns several things: entities, facts, events and categories. For purposes of today’s discussion we’re going to focus on entities. Entities are just what they sound like – they are things. Some specific examples are people, companies, organizations, geographies, sports teams and music albums.
When Calais extracts an entity from your text it returns (at least) a few things. It tells you the name of the entity and it tells you what type of entity it is. Unlike other extraction services we don’t just return a list of things – Calais tells you it found a thing of type=Company and a value=IBM or type=Person and value=Jane Doe. But – there’s something else Calais returns that hasn’t meant very much up until now: it returns a Uniform Resource Identifier (URI) for that entity. There’s nothing magic about URIs – they are simply a unique identifier for every entity that Calais discovers. Here’s an example (it’s not pretty) of what the URI for the Company IBM looks like:
Well, that doesn’t look very useful does it? If you were to pull up that URI (when Release 4 is out) all you’d see is RDF with links to places called DBpedia and Freebase and Reuters. But keep those links in mind: they’re the key to a whole new world.
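What Calais hands back for each entity can be pictured as a small record: a type, a name, and that URI. A minimal sketch in Python – the field names and the URI below are invented for illustration, not the actual Calais response format:

```python
# Illustrative only: this record's field names and URI are made up to show
# the shape of an extracted entity (type, name, identifier), not the real
# Calais wire format.
entity = {
    "type": "Company",
    "name": "IBM",
    "uri": "http://example.com/entity/company/ibm",  # hypothetical URI
}

def describe(entity):
    """Render an entity as 'type=..., value=...' plus its identifier."""
    return f"type={entity['type']}, value={entity['name']} ({entity['uri']})"

print(describe(entity))
```

The point is simply that the URI travels with the entity, so a program that holds the record can go look the thing up somewhere else.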
Linked Data is the name of a movement underway (not too surprisingly, initiated by Sir Tim Berners-Lee) that sets a standard and expected behavior for publishing and connecting data on the web. This isn’t about publishing web pages – this is about turning those web pages into data that programs can work with. We’ll give you a quick example to make it real: Wikipedia is one of the largest collections of information, across a broad range of topics, in the world. It’s really great if I’m a person who’s casually looking for information on a particular topic – but it’s not so great if I’m a computer program that wants to use that data. Why? Because it’s formatted and organized for people – not computers – to read.
But Wikipedia has a twin – in fact a Linked Data twin – called DBpedia. DBpedia has the same structured information as Wikipedia – but translated into a machine-readable format called RDF and accessible via the Linked Data standards. And Wikipedia is not alone. A growing cloud of information sets from DBpedia to the CIA World Fact Book to U.S. Census data to MusicBrainz – and many others – is becoming available. What’s important is that this cloud is 1) growing, and 2) interoperable. There are “pointers” from entries in DBpedia to entries in MusicBrainz and back to entries in GeoNames – it’s another big Web – but this time it’s a Web of Data.
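Those cross-dataset “pointers” are what make the cloud a web rather than a pile. Here’s a toy illustration in Python – every URI and record below is invented, not real DBpedia or MusicBrainz data – of how a program might hop from one dataset to another by following links:

```python
# A miniature "web of data": each record can point at the same real-world
# thing in another dataset via a "sameAs" link. All URIs here are invented.
DATA = {
    "dbpedia:The_Beatles": {
        "label": "The Beatles",
        "sameAs": ["musicbrainz:beatles-123"],
    },
    "musicbrainz:beatles-123": {
        "label": "The Beatles",
        "origin": "geonames:liverpool-456",
        "sameAs": [],
    },
    "geonames:liverpool-456": {
        "label": "Liverpool",
        "sameAs": [],
    },
}

def follow(uri, seen=None):
    """Collect every record reachable from `uri` by following sameAs links."""
    seen = seen if seen is not None else set()
    if uri in seen or uri not in DATA:
        return seen
    seen.add(uri)
    for target in DATA[uri]["sameAs"]:
        follow(target, seen)
    return seen

print(follow("dbpedia:The_Beatles"))
```

Start at the (made-up) DBpedia entry and you can reach the matching MusicBrainz entry without anyone having merged the two datasets – that’s the whole trick.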
So – lots of words and arcane concepts. Let’s try to bring it all together into something that makes sense. We’ll put one sentence out there – and then we’ll give a few examples.
Beginning with Calais Release 4 you and the programs you develop will be able to go from many of the entities Calais extracts directly to the Linked Data Cloud.
A simple example:
I want to process today’s business news. For each article I want to extract all of the companies mentioned – but only if the article also mentions a merger or acquisition. I am only interested in consulting companies whose headquarters (or those of their subsidiaries) are located in New York State. Do all of that and give me a widget for my news site titled “Merger Activity for NY Consulting Companies”. And oh, by the way, this isn’t a research project – I want you to do it in real time for the 10,000 pieces of news I process every day.
How would you do that? Option 1 is to hire a bunch of researchers, give them a fast internet connection and teach them to type very, very fast. Option 2 is to write some code that looks like this:
For each Article
  Submit to Calais, get response
  If MergerAcquisition exists then
    For each Company
      Retrieve Calais Company URI, extract DBpedia link
      Send Linked Data query to DBpedia, get response
      If CompanyIndustry contains “Consulting” then
        If CompanyHeadquarters = “New York” then
          Put the company on the list
        For each Subsidiary
          Send Linked Data query to DBpedia, get response
          If SubsidiaryHeadquarters = “New York” then
            Put the parent company on the list
(lots of endif’s)
Print the list
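The pseudocode above can be sketched as runnable Python. To keep it self-contained, everything here is stubbed: submit_to_calais and dbpedia_lookup are hypothetical stand-ins for the real Calais and DBpedia calls, and all the records are invented.

```python
# Runnable sketch of the pseudocode above. The two lookup functions are
# stubs standing in for real Calais / DBpedia calls; all data is invented.

CALAIS_STUB = {
    "Acme buys Widgets Inc.": {"merger": True, "companies": ["Acme"]},
    "Globex quarterly earnings": {"merger": False, "companies": ["Globex"]},
}

DBPEDIA_STUB = {
    "Acme": {"industry": "Consulting", "hq": "California",
             "subsidiaries": ["Acme NY"]},
    "Acme NY": {"industry": "Consulting", "hq": "New York",
                "subsidiaries": []},
    "Globex": {"industry": "Energy", "hq": "New York", "subsidiaries": []},
}

def submit_to_calais(article):
    """Stub: pretend to extract entities and facts from an article."""
    return CALAIS_STUB[article]

def dbpedia_lookup(company):
    """Stub: pretend to dereference a company's Linked Data record."""
    return DBPEDIA_STUB[company]

def ny_consulting_mergers(articles):
    hits = []
    for article in articles:
        response = submit_to_calais(article)
        if not response["merger"]:
            continue  # no merger/acquisition mentioned - skip the article
        for company in response["companies"]:
            record = dbpedia_lookup(company)
            if record["industry"] != "Consulting":
                continue
            if record["hq"] == "New York":
                hits.append(company)
                continue
            # Otherwise check whether any subsidiary is headquartered in NY.
            for sub in record["subsidiaries"]:
                if dbpedia_lookup(sub)["hq"] == "New York":
                    hits.append(company)
                    break
    return hits

print(ny_consulting_mergers(list(CALAIS_STUB)))
```

Swap the two stubs for real HTTP calls and you have the shape of the whole application: Calais finds the entities, the Linked Data cloud answers the questions the article itself can’t.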
That really is a pretty straightforward example. How about companies in the news with at least one subsidiary doing business in an area that the CIA World Fact Book considers dangerous? Or books released by authors who attended Harvard and live in Ohio? Or … we think you get the idea.
So. The summary. The combination of semantic data extraction (generic extraction, tags, keywords won’t do the trick) + de-referenceable URIs (entity identifiers you and your programs can retrieve) + the Linked Data Cloud = amazing stuff.
We’d like you to start thinking about it.