Thursday evening I participated in a panel for the Boston Product Management Association. John Cass moderated a conversation among the audience, me, Sean Martin of Cambridge Semantics, and Mike Spataro of Visible Technologies. We had excellent coverage of both the academic and practical aspects of semantic technologies of all kinds.
By all kinds, I mean to say that one of the biggest memes of the evening was the differences between, and complementary uses of, what was termed "little s" and "Big S" semantics. Now, I'm not entirely sure I like this distinction; it reminds me of other unnecessary debates, such as "Big IA" versus "little IA" for information architects, and it carries some negative connotations. "Little s" was coined to cover natural language processing and text analytics - machine-based semantics. "Big S" was coined to cover the open standards promoted by the W3C and others: RDF, OWL, SPARQL and the broader family of modeling and markup capabilities defined by cross-industry working groups.
John's post is an excellent overview of the insights provided by each panelist.
Mike gave the group a great overview of semantic analysis for social media - using text analysis, algorithms, sentiment analysis and other NLP techniques for customer monitoring. We also had great questions from the audience about competitive intelligence, asking whether these techniques could support intelligence gathering.
Sean provided one of the clearest explanations of semantic technologies from the open standards perspective that I have heard in some time. He gave a history and overview of the key standards developed by the W3C that the audience should be aware of: RDF, OWL and SPARQL. Critically, he didn't overwhelm the audience with information about every working group and variant of these standards. One key point I hope the audience took away is that on the semantic web a machine can be the consuming agent, whereas on the current web a human has to be the consumer.
I shared my long-held belief that you have to have a human-machine hybrid system: one that uses the best of NLP and standards-based models. For example, using entity extraction tools to find people or companies, and then using data modeled in an ontology to display the information stored as triples relating to each extracted entity.
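To make the hybrid idea concrete, here is a minimal sketch of that pipeline. It substitutes a toy regex-based lexicon matcher for a real entity extractor and a hand-rolled list of triples for a real RDF store with SPARQL; all the entity names, URIs and predicates are hypothetical, chosen only to illustrate the flow from extracted entity to ontology lookup.

```python
import re

# Toy triple store standing in for an RDF store / SPARQL endpoint.
# All subjects, predicates and objects here are hypothetical.
TRIPLES = [
    ("ex:AcmeCorp", "rdf:type", "ex:Company"),
    ("ex:AcmeCorp", "ex:headquarters", "Boston"),
    ("ex:AcmeCorp", "ex:ticker", "ACME"),
]

# Lexicon mapping surface forms in text to ontology identifiers.
LEXICON = {"Acme Corp": "ex:AcmeCorp"}

def extract_entities(text):
    """Stand-in for an NLP entity extractor: match known surface forms."""
    return [uri for name, uri in LEXICON.items()
            if re.search(re.escape(name), text)]

def describe(uri):
    """Stand-in for a SPARQL DESCRIBE: all predicate/object pairs for a subject."""
    return [(p, o) for s, p, o in TRIPLES if s == uri]

text = "Acme Corp announced record earnings this quarter."
for uri in extract_entities(text):
    for predicate, obj in describe(uri):
        print(uri, predicate, obj)
```

In a production system the extractor and the triple store would each be a real component; the point of the sketch is simply that the extractor's output (an ontology identifier) is the key that unlocks everything the model already knows about that entity.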
As for myself, I was much more interested in the questions coming from the audience - useful insight for individuals and organizations contemplating entering or expanding in this space. We discussed customer relationship management, competitive intelligence, and sentiment analysis; how to match an extracted entity to an ontology; how semantic technologies can improve web analytics; and how semantic technologies can turn decision makers from data aggregators who must assemble their own answers into empowered executives who simply choose an action from a small set of options.
The big question of the night was "How do we get started? We're understaffed, underfunded and have little time. How can we begin applying these tools to gain competitive advantage?" The panel recommended simple techniques such as adding RDFa to web pages, creating a small corporate profile to publish as linked data, and adding semantic capabilities to existing systems one step at a time, or purchasing new systems that support semantic technologies.
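As a flavor of what "a small corporate profile published as linked data" might look like, here is a sketch that serializes one organization as Turtle using the FOAF vocabulary. The company name, URI and property values are illustrative placeholders, and the serializer is deliberately minimal - a real deployment would use an RDF library rather than string formatting.

```python
def to_turtle(uri, rdf_type, props):
    """Serialize one resource as a minimal Turtle document.

    uri: the resource's URI (hypothetical here);
    rdf_type: its class, e.g. foaf:Organization;
    props: predicate -> already-formatted Turtle object.
    """
    lines = ["@prefix foaf: <http://xmlns.com/foaf/0.1/> .", ""]
    lines.append(f"<{uri}> a {rdf_type} ;")
    pairs = list(props.items())
    for i, (predicate, obj) in enumerate(pairs):
        terminator = " ." if i == len(pairs) - 1 else " ;"
        lines.append(f"    {predicate} {obj}{terminator}")
    return "\n".join(lines)

doc = to_turtle(
    "http://example.com/company#id",       # placeholder URI
    "foaf:Organization",
    {"foaf:name": '"Example Co."',
     "foaf:homepage": "<http://example.com/>"},
)
print(doc)
```

Publishing even a profile this small at a stable URL gives machines something to consume, which is exactly the one-step-at-a-time approach the panel suggested.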
I promised several people some resources; I will gather and post articles and journals in another entry, but in the meantime I would like to point out an excellent resource for finding tools: Sweet Tools (Sem Web) from Mike Bergman. Honestly, the site is overflowing with great data - research, a glossary, a timeline - and is worth spending time with. I know you'll find it useful.
Thanks to John for putting together a great session - it's gratifying to see interest growing. Thanks also to BPMA for their interest, and to Oracle for hosting the event. I hope many of us continue the conversation.