Tag: semanticweb


On Semantics At The Enterprise Level

September 14th, 2005 — 12:00am

In the same way that information architecture takes users’ understandings of the structure, meaning, and organization of information into account at the level of domain-specific user experiences, information spaces, and systems, the complex semantic boundaries and relationships that define and link enterprise-level domains are a natural area of activity for enterprise information architecture.
Looking for some technically oriented materials related to this level of IA – what I call enterprise semantic frameworks – I came across a solid article in the Web Services Journal titled “Enterprise Semantics: Aligning Service-Oriented Architecture with the Business”.
The authors – Joram Borenstein and Joshua Fox – take a web-services perspective on the business benefits of enterprise-level semantic efforts, but they do a good job of laying out the case for the importance of semantic concepts, understanding, and alignment at the enterprise level.
From the article abstract:
“Enterprises need transparency, a clear view of what is happening in the organization. They also need agility, which is the ability to respond quickly to changes in the internal and external environments. Finally, organizations require integration: the smooth interoperation of applications across organizational boundaries. Encoding business concepts in a formal semantic model helps to achieve these goals and also results in additional corollary benefits. This semantic model serves as a focal point and enables automated discovery and transformation services in an organization.”
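The abstract’s central claim – that a formal semantic model can serve as a shared focal point – is easier to picture with a small example. The following is my own minimal sketch, assuming Python and the rdflib library; the namespace and business concepts are invented, and the article itself doesn’t prescribe any particular encoding:

```python
# A minimal, invented sketch of what "encoding business concepts in a
# formal semantic model" can look like, using Python and rdflib.
# The namespace, classes, and properties below are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/enterprise#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Declare a shared business concept once, so separate applications can
# discover it and map their local schemas onto the same definition.
g.add((EX.Customer, RDF.type, RDFS.Class))
g.add((EX.Customer, RDFS.label, Literal("Customer")))
g.add((EX.Account, RDF.type, RDFS.Class))
g.add((EX.holdsAccount, RDF.type, RDF.Property))
g.add((EX.holdsAccount, RDFS.domain, EX.Customer))
g.add((EX.holdsAccount, RDFS.range, EX.Account))

print(g.serialize(format="turtle"))
```

Once a concept like Customer is declared in one shared model, other applications can find it and align their local schemas with it – which is the kind of automated discovery and transformation the authors describe.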
They also offer some references at the conclusion of the article:

  • Borenstein, J. and Fox, J. (2003). “Semantic Discovery for Web Services.” Web Services Journal. SYS-CON Publications, Inc. Vol. 3, issue 4. www.sys-con.com/webservices/articleprint.cfm?id=507
  • Cowles, P. (2005). “Web Service API and the Semantic Web.” Web Services Journal. SYS-CON Publications, Inc. Vol. 5, issue 2. www.sys-con.com/story/?storyid=39631&DE=1
  • Genovese, Y., Hayward, S., and Comport, J. (2004). “SOA Will Demand Re-engineering of Business Applications.” Gartner. October 8.
  • Linthicum, D. (2005). “When Building Your SOA…Service Descriptions Are Key.” WebServices.Org. March 2005. www.webservices.org/ws/content/view/full/56944
  • Schulte, R.W., Valdes, R., and Andrews, W. (2004). “SOA and Web Services Offer Little Vendor Independence.” Gartner. April 8.
  • W3C Web Services Architecture Working Group: www.w3.org/2002/ws/arch/


Concept Maps: Training Children to Build Ontologies?

May 31st, 2005 — 12:00am

Concept maps popped onto the radar last week when an article in Wired highlighted a concept mapping tool called Cmap. Cmap is one of a variety of concept mapping tools in use in schools and other educational settings to teach children to model the structure of and relationships connecting – well – concepts.
The root idea of using concept mapping in educational settings is to move away from static models of knowledge, and toward dynamic models of relationships between concepts that allow new kinds of reasoning, understanding, and knowledge. That sounds a lot like the purpose of OWL.
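The parallel is easy to see if you write a concept map down as statements. Here’s a toy sketch of my own, again in Python with rdflib; the concepts and relations are invented, and it illustrates only the shared subject–relation–object structure, not how Cmap or any OWL tool actually works:

```python
# A concept map is nodes (concepts) joined by labeled links (relations) --
# the same subject-predicate-object shape that RDF and OWL build on.
# All names below are invented for illustration.
from rdflib import Graph, Namespace

CM = Namespace("http://example.com/conceptmap#")  # hypothetical namespace

g = Graph()
g.bind("cm", CM)

# "Plants produce oxygen; oxygen sustains animals" -- a child's concept
# map rendered as machine-readable statements.
g.add((CM.Plants, CM.produce, CM.Oxygen))
g.add((CM.Oxygen, CM.sustains, CM.Animals))

# Once the map is machine-readable, it supports new kinds of reasoning,
# e.g. querying for everything plants produce.
for row in g.query("SELECT ?x WHERE { cm:Plants cm:produce ?x }",
                   initNs={"cm": CM}):
    print(row.x)
```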
It might be a stretch to say that by advocating concept maps, schools are in fact training kids to create ontologies as a basic learning and teaching method, and a vehicle for communicating complex ideas – but it’s a very interesting stretch all the same. As Information Architects, we’re familiar with the ways that structured visualizations of interconnected things – pages, topics, functions, etc. – communicate complex notions more quickly and effectively than words alone. But most of the rest of the world doesn’t think and communicate this way – or at least isn’t consciously aware that it does.
It seems reasonable that kids who learn to think in terms of concept maps from an early age might start using them to directly communicate their understandings of all kinds of things throughout life. It might be a great way to communicate the complex thoughts and ideas at play when answering a simple question like “What do you think about the war in Iraq?”
Author Nancy Kress explores this exact idea in the science fiction novel ‘Beggars In Spain’, calling the constructions “thought strings”. In Kress’ book, thought strings are the preferred method of communication for extremely intelligent genetically engineered children, who have in effect moved to realms of cognitive complexity that exceed the structural capacity of ordinary languages. As Kress describes them, the density and multidimensional nature of thought strings makes it much easier to share nuanced understandings of extremely complex domains, ideas, and situations in a compact way.
I’ve only read the first novel in the trilogy, so I can’t speak to how Kress develops the idea of thought strings, but there’s a clear connection between the construct she defines and the concept map as laid out by Novak, who says, “it is best to construct concept maps with reference to some particular question we seek to answer or some situation or event that we are trying to understand”.
Excerpts from the Wired article:
“Concept maps can be used to assess student knowledge, encourage thinking and problem solving instead of rote learning, organize information for writing projects and help teachers write new curricula.”
“We need to move education from a memorizing system and repetitive system to a dynamic system,” said Gaspar Tarte, who is spearheading education reform in Panama as the country’s secretary of governmental innovation.
“We would like to use tools and a methodology that helps children construct knowledge,” Tarte said. “Concept maps was the best tool that we found.”


mSpace Online Demo

February 20th, 2005 — 12:00am

There’s an mSpace demo online.


mSpace: A New (Usable?) Semantic Web Interface

February 18th, 2005 — 12:00am

mSpace, a new framework – including a user interface – for interacting with semantically structured information, appeared on Slashdot this morning.
According to the supporting literature, mSpace handles both ontologically structured data and RDF-based information that is not modelled with ontologies.
What is potentially most valuable about the mSpace framework is a useful, usable interface for both navigating / exploring RDF-based information spaces and editing them.
From the mSpace sourceforge site:
“mSpace is an interaction model designed to allow a user to navigate in a meaningful manner the multi-dimensional space that an ontology can provide. mSpace offers potentially useful slices through this space by selection of ontological categories.
mSpace is fully generalised and as such, with a little definition, can be used to explore any knowledge base (without the requirement of ontologies!).
Please see mspace.ecs.soton.ac.uk for more information.”
From the abstract of the technical report, titled “mSpace: exploring the Semantic Web”:
“Information on the web is traditionally accessed through keyword searching. This method is powerful in the hands of a user that is experienced in the domain they wish to acquire knowledge within. Domain exploration is a more difficult task in the current environment for a user who does not precisely understand the information they are seeking. Semantic Web technologies can be used to represent a complex information space, allowing the exploration of data through more powerful methods than text search. Ontologies and RDF data can be used to represent rich domains, but can have a high barrier to entry in terms of application or data creation cost.
The mSpace interaction model describes a method of easily representing meaningful slices through these multidimensional spaces. This paper describes the design and creation of a system that implements the mSpace interaction model in a fashion that allows it to be applied across almost any set of RDF data with minimal reconfiguration. The system has no requirement for ontological support, but can make use of it if available. This allows the visualisation of existing non-semantic data with minimal cost, without sacrificing the ability to utilise the power that semantically-enabled data can provide.”
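To make the “slices through these multidimensional spaces” idea more tangible, here’s a toy sketch of my own in Python with rdflib – invented music-domain data, and in no way mSpace’s actual implementation – showing how selecting a value in one ontological category narrows the next:

```python
# A toy illustration of the mSpace "slice" idea: arrange categories in an
# order (era -> composer -> piece), then narrow one column to filter the
# next. Invented data; not mSpace's actual code.
from rdflib import Graph, Namespace

EX = Namespace("http://example.com/music#")  # hypothetical namespace

g = Graph()
for era, composer, piece in [
    ("Baroque", "Bach", "Goldberg_Variations"),
    ("Baroque", "Handel", "Messiah"),
    ("Romantic", "Chopin", "Nocturnes"),
]:
    g.add((EX[composer], EX.inEra, EX[era]))
    g.add((EX[composer], EX.wrote, EX[piece]))

def slice_by_era(graph, era):
    """One slice through the space: era -> composers -> their pieces."""
    composers = [s for s, _, _ in graph.triples((None, EX.inEra, EX[era]))]
    return {
        c: [o for _, _, o in graph.triples((c, EX.wrote, None))]
        for c in composers
    }

# Selecting "Baroque" in the era column filters the composer and piece
# columns, much as clicking in one pane of a columnar browser would.
print(slice_by_era(g, "Baroque"))
```

Note that the sketch needs nothing beyond plain triples, which echoes the paper’s point that ontological support is optional.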


Two Surveys of Ontology / Taxonomy / Thesaurus Editors

February 18th, 2005 — 12:00am

While researching and evaluating user interfaces and management tools for semantic structures – ontologies, taxonomies, thesauri, etc. – I’ve come across or been directed to two good surveys of tools.
The first, courtesy of HP Labs and the SIMILE project, is Review of existing tools for working with schemas, metadata, and thesauri. Thanks to Will Evans for pointing this out.
The second is a comprehensive review of nearly 100 ontology editors, or applications offering ontology editing capabilities, put together by Michael Denny at XML.com. You can read the full article Ontology Building: A Survey of Editing Tools, or go directly to the Summary Table of Survey Results.
The survey was originally written in 2002 and updated in July 2004.


Tim Bray and the RDF Challenge: Poor Tools Are A Barrier For The Semantic Web

February 7th, 2005 — 12:00am

In the latest issue of ACMQueue, Tim Bray is interviewed about his career path and early involvement with the SGML and XML standards. Along the way, Bray makes four points about the slow pace of adoption for RDF, and reiterates his conviction that the current quality of RDF-based tools is an obstacle to their adoption and to the success of the Semantic Web.
Here are Bray’s points, with some commentary based on recent experiences with RDF- and OWL-based ontology management tools.
1. Motivating people to provide metadata is difficult. Bray says, “If there’s one thing we’ve learned, it’s that there’s no such thing as cheap meta-data.”
This is plainly a problem in spaces much beyond RDF. I hold the concept and the label meta-data itself partly responsible, since the term meta-data explicitly separates the descriptive/referential information from the idea of the data itself. I wager that user adoption of meta-data tools and processes will increase as soon as we stop dissociating a complete package into two distinct things, with different implied levels of effort and value. I’m not sure what a unified label for the base level unit construct made of meta-data and source data would be (an asset maybe?), but the implied devaluation of meta-data as an optional or supplemental element means that the time and effort demands of accurate and comprehensive tagging seem onerous to many users and businesses. Thus the proliferation of automated taxonomy and categorization generation tools…
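To sketch what a unified construct might look like in practice – purely my own hypothetical illustration, not an existing standard – imagine the description traveling with the content instead of alongside it:

```python
# A hypothetical sketch: content and its descriptive metadata packaged as
# one first-class "asset", rather than data plus optional annotations.
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A single unit: the content and its description, inseparable."""
    content: bytes
    title: str
    subjects: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # Under the unified framing, an undescribed asset is incomplete,
        # not merely "missing optional metadata".
        return bool(self.title and self.subjects)

doc = Asset(
    content=b"...report body...",
    title="Q3 Revenue Report",
    subjects=["finance", "quarterly reporting"],
)
print(doc.is_complete())  # True
```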
2. Inference based processing is ineffective. Bray says, “Inferring meta-data doesn’t work… Inferring meta-data by natural language processing has always been expensive and flaky with a poor return on investment.”
I think this isn’t specific enough to agree with without qualification. However, I have seen analysis of a number of inferencing systems, and they tend to be slow, especially when processing and updating large RDF graphs. I’m not a systems architect or an engineer, but it does seem that none of the various solutions now available directly solves the problem of allowing rapid, real-time inferencing. This is an issue with structures that change frequently, or during high-intensity periods of the ontology life-cycle, such as initial build and editorial review.
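For a rough sense of where the cost comes from: forward-chaining reasoners typically materialize the inferred closure of the entire graph, so frequent updates can force repeated re-expansion. A minimal sketch, assuming Python with rdflib and the owlrl reasoner package:

```python
# A minimal sketch of forward-chaining RDFS inference with rdflib + owlrl.
# Expanding the deductive closure materializes inferred triples across the
# whole graph, which is why graphs that change frequently -- e.g. during
# initial build or editorial review -- are expensive to keep current.
from rdflib import Graph, Namespace, RDF, RDFS
from owlrl import DeductiveClosure, RDFS_Semantics

EX = Namespace("http://example.com/ex#")  # hypothetical namespace

g = Graph()
g.add((EX.Manager, RDFS.subClassOf, EX.Employee))
g.add((EX.alice, RDF.type, EX.Manager))

before = len(g)
DeductiveClosure(RDFS_Semantics).expand(g)  # materialize inferred triples
after = len(g)

print(f"{before} triples before expansion, {after} after")
# The closure now includes the inferred (EX.alice, RDF.type, EX.Employee).
print((EX.alice, RDF.type, EX.Employee) in g)  # True
```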
3. Bray says, “To this day, I remain fairly unconvinced of the core Semantic Web proposition. I own the domain name RDF.net. I’ve offered the world the RDF.net challenge, which is that for anybody who can build an actual RDF-based application that I want to use more than once or twice a week, I’ll give them RDF.net. I announced that in May 2003, and nothing has come close.”
Again, I think this needs some clarification, but it brings out a serious potential barrier to the success of RDF and the Semantic Web by showcasing the poor quality of existing tools as a direct negative influence on user satisfaction. I’ve heard this from users working with both commercial and home-built semantic structure management tools, and at all levels of usage from core to occasional.
To this I would add the idea that RDF was meant for interpretation by machines, not people, and as a consequence the basic user experience paradigms for displaying and manipulating large RDF graphs and other semantic constructs remain unresolved. Mozilla and Netscape did wonders to make the WWW apparent in a visceral and tangible fashion; I suspect RDF may need the same to really take off and enter the realm of the less-than-abstruse.
4. RDF was not intended to be a Knowledge Representation language. Bray says, “My original version of RDF was as a general-purpose meta-data interchange facility. I hadn’t seen that it was going to be the basis for a general-purpose KR version of the world.”
This sounds a bit like a warning, or at least a strong admonition against reaching too far. OWL and variants are new (relatively), so it’s too early to tell if Bray is right about the scope and ambition of the Semantic Web effort being too great. But it does point out that the context of the standard bears heavily on its eventual functional achievement when put into effect. If RDF was never meant to bear its current load, then it’s not a surprise that an effective suite of RDF tools remains unavailable.

