Empirical Discovery: Concept and Workflow Model

June 20th, 2014 — 12:00am

Concept models are a powerful tool for articulating the essential elements and relationships that define new or complex things we need to understand.  We’ve previously defined empirical discovery as a new method, looked at its antecedents, and compared and contrasted the distinctive characteristics of Empirical Discovery with other knowledge creation and insight-seeking methods.  I’m now sharing our concept model of Empirical Discovery, which identifies the most important actors, activities, and outcomes of empirical discovery efforts, complementing the written definition by illustrating how the method works in practice.

Empirical discovery concept model from Joe Lamantia

In this model, we illustrate the activities of the three kinds of people most central to discovery efforts: Insight Consumers, Data Scientists, and Data Engineers.  We have robust definitions of all the major actors involved in discovery (used to drive product development), and may share some of these personas, profiles, and snapshots subsequently.  For reading this model, understand Insight Consumers as the people who rely on insights from discovery efforts to direct and manage the operations of the business.  Data Scientists are the sensemakers who achieve insights and create data products and analytical models through discovery efforts.  Data Engineers enable discovery efforts by building the enterprise data analysis infrastructure necessary for discovery, and often implement the outcomes of empirical discovery by building new tools based on the insights and models Data Scientists create.

A key assumption of this model is that discovery is by definition an iterative and serendipitous method, relying on frequent back-steps and unpredictable repetition of activities as a necessary aspect of how discovery efforts unfold.  This model also assumes the data, methods, and tools shift during discovery efforts, in keeping with the evolution of motivating questions, and the achievement of interim outcomes.  Similarly, discovery efforts do not always involve all of these elements.

To keep the essential structure and relationships between elements clear and in the foreground, we have not shown all of the possible iterative loops or repeated steps.  Some closely related concepts are grouped together, to allow reading the model on two levels of detail.

For a simplified view, follow the links between named actors and groups of concepts shown with colored backgrounds and labels.  In this reading, an Insight Consumer articulates questions to a Data Scientist, who combines domain knowledge with the Empirical Discovery Method (yellow) to direct the application of Analytical Tools (blue) and Models (salmon) to Data Sets (green) drawn from Data Sources (magenta).  The Data Scientist shares Insights resulting from discovery efforts with the Insight Consumer, while Data Engineers may implement the models or data products created by the Data Scientist by turning them into tools and infrastructure for the rest of the business.  For a more detailed view of the specific concepts and activities common to Empirical Discovery efforts, follow the links between the individual concepts within these named groups.  (Note: there are two kinds of connections: solid arrows indicating definite relationships, and, for the Data Sets and Models groups, dashed arrows indicating possible paths of evolution.  More on this to follow.)

Another way to interpret the two levels of detail in this model is as descriptions of formal vs. informal implementations of the empirical discovery method.  People and organizations who take a more formal approach to empirical discovery may require explicitly defined artifacts and activities that address each major concept, such as predictions and experimental results.  In less formal approaches, Data Scientists may implicitly address each of the major concepts and activities, such as framing hypotheses, or tracking the states of data sets they are working with, without any formal artifact or decision gateway.  This situational flexibility follows from the applied nature of the empirical discovery method, which does not require scientific standards of proof and reproducibility to generate valued outcomes.

The story begins in the upper right corner, when an Insight Consumer articulates a belief or question to a Data Scientist, who then translates this motivating statement into a planned discovery effort that addresses the business goal.  The Data Scientist applies the Empirical Discovery Method (concepts in yellow): possibly generating a hypothesis and accompanying predictions which will be tested by experiments, choosing data from the range of available data sources (grouped in magenta), and selecting initial analytical methods consistent with the domain, the data sets (green), and the analytical or reference models (salmon) they will work with.  Given the particulars of the data and the analytical methods, the Data Scientist employs specific analytical tools (blue) such as algorithms and statistical or other measures, based on factors such as expected accuracy and speed or ease of use.  As the effort progresses through iterations, or insights emerge, experiments may be added or revised, based on the conclusions the Data Scientist draws from the results and their impact on starting predictions or hypotheses.

For example, consider an Insight Consumer who works in a product management capacity for an online social network, with a business goal of increasing users’ level of engagement with the service, and who wishes to identify opportunities to recommend that users establish new connections with other similar and possibly known users, based on unrecognized affinities in their posted profiles.  The Data Scientist translates this business goal into a series of experiments investigating predictions about which aspects of user profiles most effectively predict the likelihood of creating new connections in response to system-generated recommendations for similarity.  The Data Scientist frames experiments that rely on data from the accumulated logs of user activities within the network, anonymized to comply with privacy policies, selecting specific working sets of data to analyze based on awareness of the shape and nature of the attributes that appear directly in users’ profiles, both across the entire network and among pools of similar but unconnected users.  The Data Scientist plans to begin with analytical methods useful for predictive modeling of the effectiveness of recommender systems in network contexts, such as measurements of the affinity of users’ interests based on semantic analysis of social objects shared by users within this network (and publicly in other online media), and structural or topological measures of relative position and distance from the field of network science.  The Data Scientist chooses a set of standard social network analysis algorithms and measures, combined with custom models for interpreting user activity and interest unique to this network.  The Data Scientist has predefined scripts and open source libraries available for ready application to data (MLlib, Gephi, Weka, Pandas, etc.) in the form of Analytical Tools, which she will combine in sequences according to the desired analytical flow for each experiment, as in the sketch below.
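
To make this concrete, here is a minimal sketch of one such experiment, assuming entirely hypothetical profile and outcome data.  It computes pairwise profile affinity as TF-IDF cosine similarity using pandas and scikit-learn (standing in for whichever libraries a given team actually prefers), then compares affinity scores against observed responses to past recommendations.  Column names such as profile_text and accepted_recommendation are illustrative, not drawn from any real system.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical working set: anonymized profiles, plus observed outcomes
# of past "people you may know" style recommendations.
profiles = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "profile_text": [
        "hiking photography espresso",
        "photography travel espresso",
        "fantasy football fishing",
        "fishing camping football",
    ],
})
outcomes = pd.DataFrame({
    "user_a": [1, 1, 3],
    "user_b": [2, 3, 4],
    "accepted_recommendation": [True, False, True],
})

# Profile affinity: cosine similarity over TF-IDF vectors of profile
# text, one simple proxy for semantic analysis of shared interests.
vectors = TfidfVectorizer().fit_transform(profiles["profile_text"])
affinity = cosine_similarity(vectors)

# Attach each recommended pair's affinity to its observed outcome, then
# compare mean affinity for accepted vs. ignored recommendations: a
# first, crude test of the prediction that affinity drives acceptance.
idx = {u: i for i, u in enumerate(profiles["user_id"])}
outcomes["affinity"] = [
    affinity[idx[a], idx[b]]
    for a, b in zip(outcomes["user_a"], outcomes["user_b"])
]
print(outcomes.groupby("accepted_recommendation")["affinity"].mean())
```

In a real effort this comparison would be one experiment among several, each revised or discarded as interim results reshape the starting predictions.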

The nature of analytical engagement with data sets varies during the course of discovery efforts, with different types of data sets playing different roles at specific stages of the discovery workflow.  Our concept map simplifies the lifecycle of data for purposes of description, identifying five distinct and recognizable ways data are used by the Data Scientist, with five corresponding types of data sets.  In some cases, formal criteria on data quality, completeness, accuracy, and content govern which stage of the data lifecycle any given data set occupies.  In most discovery efforts, however, Data Scientists themselves make a series of judgements about when and how the data in hand is suitable for use.  The dashed arrows linking the five types of data sets capture the approximate and conditional nature of these different stages of evolution.  In practice, discovery efforts begin with exploration of data that may or may not be relevant for focused analysis, but which requires some direct engagement and attention to rule in or out of consideration.  Focused analytical investigation of the relevant data follows, made possible by the iterative addition, refinement, and transformation (wrangling – more on this in later posts) of the exploratory data in hand.  At this stage, the Data Scientist applies analytical tools identified by their chosen analytical method.  The model building stage seeks to create explicit, formal, and reusable models that articulate the patterns and structures found during investigation.  When validation of newly created analytical models is necessary, the Data Scientist uses appropriate data – typically data that was not part of explicit model creation.  Finally, training data is sometimes necessary to put models into production – either using them for further steps in analytical workflows (which can be very complex), or in business operations outside the analytical context.
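
The progression between these stages can be sketched schematically in code.  The sketch below is illustrative only: pandas and scikit-learn stand in for whatever stack a given effort uses, and the toy data frame, its columns, and the filtering judgements are all hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

raw = pd.DataFrame({  # candidate data, relevance not yet established
    "sessions":    [0, 3, 5, None, 8, 1, 12, 7, 2, 9],
    "connections": [0, 1, 2, 1,    4, 0, 6,  3, 0, 5],
})

# 1. Exploratory data: a cheap sample, engaged with just enough to rule
#    the source in or out of consideration.
print(raw.sample(5, random_state=0).describe())

# 2. Analytical data: the relevant subset after wrangling, i.e. the
#    addition, refinement, and transformation of the exploratory data.
analytical = raw.dropna().query("sessions > 0")
target = analytical["connections"] > 0

# 3/4. Model building and validation: fit on one slice, then validate
#      on data that played no part in building the model.
build_X, val_X, build_y, val_y = train_test_split(
    analytical[["sessions"]], target, test_size=0.3,
    random_state=0, stratify=target)
model = LogisticRegression().fit(build_X, build_y)
print("validation accuracy:", model.score(val_X, val_y))

# 5. Training data: once the model is judged sound, the full analytical
#    set trains the version that leaves the discovery context.
production_model = LogisticRegression().fit(analytical[["sessions"]], target)
```

The judgement calls between stages (how much to sample, what to filter, when a model counts as validated) are exactly the informal decisions described above.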

Because so much discovery activity requires transformation of the data before or during analysis, there is great interest in the Data Science and business analytics industries in how Data Scientists and sensemakers work with data at these various stages.  Much of this attention focuses on the need for better tools for transforming data in order to make analysis possible.  This model does not explicitly represent wrangling as an activity, because it is not directly a part of the empirical discovery method; transformation is done only as and when needed to make analysis possible.  However, understanding the nature of wrangling and transformation activities is a very important topic for grasping discovery, so I’ll address it in later postings.  (We have a good model for this too…)

Empirical discovery efforts aim to create one or more of the three types of outcomes shown in orange: insights, models, and data products.  Insights, as we’ve defined them previously, are discoveries that change people’s perspective or understanding, not simply the results of analytical activity, such as the end values of analytical calculations, the generation of reports, or the retrieval and aggregation of stored information.

One of the most valuable outcomes of discovery efforts is the creation of externalized models that describe behavior, structure, or relationships in clear and quantified terms.  The models that result from empirical discovery efforts can take many forms — google ‘predictive model’ for a sense of the tremendous variation in what people active in business analytics consider to be a useful model — but their defining characteristic is that a model always describes aspects of a subject of discovery and analysis that are not directly present in the data itself.  For example, if given the node and edge data identifying all of the connections between people in the social network above, one possible model resulting from analysis of the network structure is a descriptive readout of the topology of the network as scale-free, with some set of subgraphs, a range of node centrality values, a matrix of possible shortest paths between nodes or subgraphs, etc.  It is possible to make sense of, interpret, or circulate a model independently of the data it describes and is derived from.
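
A toy version of such a readout, assuming the networkx library and a hypothetical edge list, might look like the following.  The printed summary is the model: a description of the network that appears nowhere in the raw node and edge data itself, and that can circulate independently of it.

```python
import networkx as nx

# Hypothetical anonymized edge list: who is connected to whom.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
g = nx.Graph(edges)

model = {
    # Subgraphs: the connected communities within the network.
    "components": [sorted(c) for c in nx.connected_components(g)],
    # Node centrality values: how structurally central each node is.
    "degree_centrality": nx.degree_centrality(g),
    # Shortest path lengths between all node pairs.
    "shortest_paths": dict(nx.all_pairs_shortest_path_length(g)),
}
# (A fit of the degree distribution, omitted here, would be one way to
# test the scale-free claim.)
print(model)
```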

Data Scientists also engage with models in distinct and recognizable ways during discovery efforts.  Reference models, determined by the domain of investigation, often guide exploratory analysis of discovery subjects by providing Data Scientists with general explanations and quantifications for processes and relationships common to the domain.  And the models generated as insight and understanding accumulate during discovery evolve in stages, from initial articulation through validation to readiness for production implementation, which means being put into effect directly in the operations of the business.

Data products are best understood as ‘packages’ of data which have utility for other analytical or business purposes, such as a list of users in the social network who are predicted to form new connections in response to system-generated suggestions of other similar users.  Data products are not literally finished products that the business offers for external sale or consumption.  And as background, we assume operationalization or ‘implementation’ of the outcomes of empirical discovery efforts to change the functioning of the business is the goal of different business processes, such as product development.  While empirical discovery focuses on achieving understanding, rather than making things, this is not the only thing Data Scientists do for the business.  The classic definition of Data Science, as aimed at creating new products based on data which impact the business, is a broad mandate, and many of the position descriptions for data science jobs require participation in product development efforts.
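
One hypothetical form such a package might take is a scored list of unconnected user pairs, written out for a downstream recommendation service.  The user ids, scores, and threshold below are purely illustrative; in practice the scores would come from a validated model like those described above.

```python
import csv

# Hypothetical model output: (user_a, user_b, predicted probability
# that a recommendation to connect would be accepted).
scored_pairs = [(101, 204, 0.91), (101, 317, 0.78), (204, 317, 0.22)]

with open("connection_candidates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_a", "user_b", "p_connect"])
    # Ship only pairs worth recommending; the cutoff is a business
    # decision layered on top of the model, not part of it.
    writer.writerows(p for p in scored_pairs if p[2] >= 0.5)
```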

Two or more kinds of outcomes are often bundled together as the results of a genuinely successful discovery effort; for example, an insight that two apparently unconnected business processes are in fact related through mutual feedback loops, and a model explicitly describing and quantifying the nature of the relationships as discovered through analysis.

There’s more to the story, but as one trip through the essential elements of empirical discovery, this is a logical point to pause and ask: what might be missing from this model?  And how can it be improved?

 


Data Science and Empirical Discovery: A New Discipline Pioneering a New Analytical Method

March 26th, 2014 — 12:00am

One of the essential patterns of science and industry in the modern era is that new methods for understanding — what I’ll call sensemaking from now on — often emerge hand in hand with new professional and scientific disciplines.  This linkage between new disciplines and new methods follows from the deceptively simple imperative to realize new types of insight, which often means analysis of new kinds of data, using new techniques, applied from newly defined perspectives.  New viewpoints and new ways of understanding are literally bound together in a sort of symbiosis.

One familiar example of this dynamic is the rapid development of statistics during the 18th and 19th centuries, in close parallel with the rise of new social science disciplines including economics (originally political economy) and sociology, and natural sciences such as astronomy and physics.  On a very broad scale, we can see the pattern in the tandem evolution of the scientific method for sensemaking, and the codification of modern scientific disciplines based on precursor fields such as natural history and natural philosophy during the scientific revolution.

Today, we can see this pattern clearly in the simultaneous emergence of Data Science as a new and distinct discipline accompanied by Empirical Discovery, the new sensemaking and analysis method Data Science is pioneering.  Given its dramatic rise to prominence recently, declaring Data Science a new professional discipline should inspire little controversy.  Declaring Empirical Discovery a new method may seem bolder, but with the essential pattern of new disciplines appearing in tandem with new sensemaking methods in mind, it is more controversial to suggest Data Science is a new discipline that lacks a corresponding new method for sensemaking.  (I would argue it is the method that makes the discipline, not the other way around, but that is a topic for fuller treatment elsewhere.)

What is empirical discovery?  While empirical discovery is a new sensemaking method, we can build on two existing foundations to understand its distinguishing characteristics, and help craft an initial definition.  The first of these is an understanding of the empirical method. Consider the following description:

“The empirical method is not sharply defined and is often contrasted with the precision of the experimental method, where data are derived from the systematic manipulation of variables in an experiment.  …The empirical method is generally characterized by the collection of a large amount of data before much speculation as to their significance, or without much idea of what to expect, and is to be contrasted with more theoretical methods in which the collection of empirical data is guided largely by preliminary theoretical exploration of what to expect. The empirical method is necessary in entering hitherto completely unexplored fields, and becomes less purely empirical as the acquired mastery of the field increases. Successful use of an exclusively empirical method demands a higher degree of intuitive ability in the practitioner.”

Data Science as practiced is largely consistent with this picture.  Empirical prerogatives and understandings shape the procedural planning of Data Science efforts, rather than theoretical constructs.  Semi-formal approaches predominate over explicitly codified methods, signaling the importance of intuition.  Data scientists often work with data that is on-hand already from business activity, or data that is newly generated through normal business operations, rather than seeking to acquire wholly new data that is consistent with the design parameters and goals of formal experimental efforts.  Much of the sensemaking activity around data is explicitly exploratory (what I call the ‘panning for gold’ stage of evolution – more on this in subsequent postings), rather than systematic in the manipulation of known variables.  These exploratory techniques are used to address relatively new fields such as the Internet of Things, wearables, and large-scale social graphs and collective activity domains such as instrumented environments and the quantified self.  These new domains of application are not mature in analytical terms; analysts are still working to identify the most effective techniques for yielding insights from data within their bounds.

The second relevant perspective is our understanding of discovery as an activity that is distinct and recognizable in comparison to generalized analysis: from this, we can summarize discovery as sensemaking intended to arrive at novel insights, through exploration and analysis of diverse and dynamic data in an iterative and evolving fashion.

Looking deeper, one specific characteristic of discovery as an activity is the absence of formally articulated statements of belief and expected outcomes at the beginning of most discovery efforts.  Another is the iterative nature of discovery efforts, which can change course in non-linear ways and even ‘backtrack’ on the way to arriving at insights: both the data and the techniques used to analyze data change during discovery efforts.  Formally defined experiments are much more clearly determined from the beginning, and their definition is less open to change during their course.  A program of related experiments conducted over time may show iterative adaptation of goals, data, and methods, but the individual experiments themselves are not malleable and dynamic in the fashion of discovery.  Discovery’s emphasis on novel insight as preferred outcome is another important characteristic; by contrast, formal experiments are repeatable and verifiable by definition, and the degree of repeatability is a criterion of well-designed experiments.  Discovery efforts often involve an intuitive shift in perspective that is recountable and retraceable in retrospect, but cannot be anticipated.

Building on these two foundations, we can define Empirical Discovery as a hybrid, purposeful, applied, augmented, iterative and serendipitous method for realizing novel insights for business, through analysis of large and diverse data sets.

Let’s look at these facets in more detail.

Empirical discovery primarily addresses the practical goals and audiences of business (or industry), rather than scientific, academic, or theoretical objectives.  This is tremendously important, since the practical context impacts every aspect of Empirical Discovery.

‘Large and diverse data sets’ reflects the fact that Data Science practitioners engage with Big Data as we currently understand it; situations in which the confluence of data types and volumes exceeds the capabilities of business analytics to practically realize insights in terms of tools, infrastructure, practices, etc.

Empirical discovery uses a rapidly evolving hybridized toolkit, blending a wide range of general and advanced statistical techniques with sophisticated exploratory and analytical methods from a wide variety of sources that includes data mining, natural language processing, machine learning, neural networks, Bayesian analysis, and emerging techniques such as topological data analysis and deep learning.

What’s most notable about this hybrid toolkit is that Empirical Discovery does not originate novel analysis techniques; it borrows tools from established disciplines such as information retrieval, artificial intelligence, computer science, and the social sciences.  Many of the more specialized or apparently exotic techniques data science and empirical discovery rely on, such as support vector machines, deep learning, or measuring mutual information in data sets, have established histories of usage in academic or other industry settings, and have reached reasonable levels of maturity.  Empirical discovery’s hybrid toolkit is transposed from one domain of application to another, rather than invented.
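
As a small illustration of this transposition, measuring the mutual information between two categorical attributes of business data takes a single call to an off-the-shelf scikit-learn function; the information-theoretic machinery arrives ready-made from another discipline.  The columns here are hypothetical.

```python
from sklearn.metrics import mutual_info_score

# Two hypothetical categorical attributes from business data.
device  = ["mobile", "mobile", "desktop", "desktop", "mobile", "tablet"]
churned = ["yes",    "yes",    "no",      "no",      "yes",    "no"]

# Nonzero mutual information flags a dependency between the attributes
# worth investigating further; zero indicates independence.
print(mutual_info_score(device, churned))
```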

Empirical Discovery is an applied method in the same way Data Science is an applied discipline: it originates in and is adapted to business contexts, it focuses on arriving at useful insights to inform business activities, and it is not used to conduct basic research.  At this early stage of development, Empirical Discovery has no independent and articulated theoretical basis and does not (yet) advance a distinct body of knowledge based on theory or practice. All viable disciplines have a body of knowledge, whether formal or informal, and applied disciplines have only their cumulative body of knowledge to distinguish them, so I expect this to change.

Empirical discovery is not only applied, but explicitly purposeful in that it is always set in motion and directed by an agenda from a larger context, typically the specific business goals of the organization acting as a prime mover and funding data science positions and tools.  Data Science practitioners effect Empirical Discovery by making it happen on a daily basis – but wherever there is empirical discovery activity, there is sure to be intentionality from a business view.  For example, even in organizations with a formal hack time policy, our research suggests there is little or no completely undirected or self-directed empirical discovery activity, whether conducted by formally recognized Data Science practitioners, business analysts, or others.

One very important implication of the situational purposefulness of Empirical Discovery is that there is no direct imperative for generating a body of cumulative knowledge through original research: the insights that result from Empirical Discovery efforts are judged by their practical utility in an immediate context.  There is also no explicit scientific burden of proof or verifiability associated with Empirical Discovery within its primary context of application.  Many practitioners encourage some aspects of verifiability, for example, by annotating the various sources of data used for their efforts and the transformations involved in wrangling data on the road to insights or data products, but this is not a requirement of the method.  Another implication is that empirical discovery does not adhere to any explicit moral, ethical, or value-based missions that transcend working context.  While Data Scientists often interpret their role as transformative, this is in reference to business.  Data Science is not medicine, for example, with a Hippocratic oath.

Empirical Discovery is an augmented method in that it depends on computing and machine resources to increase human analytical capabilities: it is simply impractical for people to manually undertake many of the analytical techniques common to Data Science.  An important point to remember about augmented methods is that they are not automated; people remain necessary, and it is the combination of human and machine that is effective at yielding insights.  In the problem domain of discovery, the patterns of sensemaking activity leading to insight are intuitive, non-linear, and associative; activities with these characteristics are not fully automatable with current technology.  And while many analytical techniques can be usefully automated within boundaries, these tasks typically make up just a portion of a complete discovery effort.  For example, using latent class analysis to explore a machine-sampled subset of a larger data corpus is task-specific automation complementing human perspective at particular points of the Empirical Discovery workflow.  This dependence on machine-augmented analytical capability is recent within the history of analytical methods.  In most of the modern era — roughly the later 17th, 18th, 19th, and early 20th centuries — the data employed in discovery efforts was manageable ‘by hand’, even when using the newest mathematical and analytical methods emerging at the time.  This remained true until the effective commercialization of machine computing ended the need for human computers as a recognized role in the middle of the 20th century.
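
That division of labor can be sketched as follows: the machine samples and fits, while the person inspects and interprets.  scikit-learn has no latent class analysis proper, so a Gaussian mixture stands in below as the latent-variable model, and the data is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for a larger data corpus with two latent groups.
corpus = np.concatenate([rng.normal(0, 1, (5000, 2)),
                         rng.normal(4, 1, (5000, 2))])

# Automated portion: machine-sample a subset of the corpus and fit a
# two-class latent-variable model to it.
subset = corpus[rng.choice(len(corpus), size=1000, replace=False)]
mixture = GaussianMixture(n_components=2, random_state=0).fit(subset)

# Human portion: the analyst reads the class means and weights, judges
# whether the latent classes are meaningful, and decides what to try next.
print(mixture.means_)
print(mixture.weights_)
```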

The reality of most analytical efforts — even those with good initial definition — is that insights often emerge in response to and in tandem with changing and evolving questions which were not identified, or perhaps not even understood, at the outset.  During discovery efforts, analytical goals and techniques, as well as the data under consideration, often shift in unpredictable ways, making the path to insight dynamic and non-linear.  Further, the sources of and inspirations for insight are difficult or impossible to identify both at the time and in retrospect.  Empirical discovery addresses the complex and opaque nature of discovery with iteration and adaptation, which combine to set the stage for serendipity.

With this initial definition of Empirical Discovery in hand, the natural question is what this means for Data Science and business analytics.  Three things stand out for me.  First, I think one of the central roles played by Data Science is in pioneering the application of existing analytical methods from specialized domains to serve general business goals and perspectives, seeking effective ways to work with the new types (graph, sensor, social, etc.) and tremendous volumes (yotta, yotta, yotta…) of business data at hand in the Big Data moment and realize insights.

Second, following from this, Empirical Discovery is a methodological framework within and through which a great variety of analytical techniques, at differing levels of maturity and from other disciplines, are vetted for business analytical utility in iterative fashion by Data Science practitioners.

And third, it seems this vetting function is deliberately part of the makeup of empirical discovery, which I consider a very clever way to create a feedback loop that enhances Data Science practice by using Empirical Discovery as a discovery tool for refining its own methods.


Big Data is a Condition (Or, “It’s (Mostly) In Your Head”)

March 10th, 2014 — 12:00am

Unsurprisingly, definitions of Big Data run the gamut from the turgid to the flip, making room to include the trite, the breathless, and the simply un-inspiring in the big circle around the campfire. Some of these definitions are useful in part, but none of them captures the essence of the matter. Most are mistakes in kind, trying to ground and capture Big Data as a ‘thing’ of some sort that is measurable in objective terms. Anytime you encounter a number, this is the school of thought.

Some approach Big Data as a state of being, most often a simple operational state of insufficiency of some kind; typically resources like analysts, compute power or storage for handling data effectively; occasionally something less quantifiable like clarity of purpose and criteria for management. Anytime you encounter phrasing that relies on the reader to interpret and define the particulars of the insufficiency, this is the school of thought.

I see Big Data as a self-defined (perhaps diagnosed is more accurate) condition, but one that is based on idiosyncratic interpretation of current and possible future situations in which understanding of, planning for, and activity around data are central.

Here’s my working definition: Big Data is the condition in which very high actual or expected difficulty in working successfully with data combines with very high anticipated but unknown value and benefit, leading to the a priori assumption that currently available information management and analytical capabilities are broadly insufficient, making new and previously unknown capabilities seemingly necessary.


Strata New York Video: Designing Big Data Interactions With the Language of Discovery

December 6th, 2013 — 12:00am

I’m late to making it available here, but O’Reilly media published the video recording of my presentation on The Language of Discovery: A Toolkit For Designing Big Data Interactions from last year’s (2012) Strata conference in NY.

Looking back at this, I’m happy to say that while my thinking on several of the key ideas has advanced quite a bit in the past 12 months (see our more recent materials), the core ideas and concepts remain vital.

Those are, briefly:

  • Big Data is useless unless people can engage with it effectively
  • Discovery is a critical and inadequately acknowledged aspect of sense making that is core to realizing value from Big Data
  • Discovery is literally the most important human/machine interaction in the emerging Age of Insight
  • Providing discovery capability requires understanding people’s needs and goals
  • The Language of Discovery is an effective tool for understanding discovery needs and activities, and designing solutions
  • There are known patterns and structure in discovery activities that you can use to create discovery solutions

I’ve posted it to vimeo for easier viewing – slides are here /user-experience-ux/strata-new-york-slides-new-discovery-patterns for those who wish to follow along – enjoy!


Discovery and the Age of Insight

August 21st, 2013 — 12:00am

Several weeks ago, I was invited to speak to an audience of IT and business leaders at Walmart about the Language of Discovery.   Every presentation is a feedback opportunity as much as a chance to broadcast our latest thinking (a tenet of what I call lean strategy practice – musicians call it trying out new material), so I make a point to share evolving ideas and synthesize what we’ve learned since the last instance of public dialog.

For the audience at Walmart, as part of the broader framing for the Age of Insight, I took the opportunity to share findings from some of the recent research we’ve done on Data Science (that’s right, we’re studying data science).  We’ve engaged consistently with data science practitioners for several years now (some of the field’s leaders are alumni of Endeca), as part of our ongoing effort to understand the changing nature of analytical and sense making activities, the people undertaking them, and the contexts in which they take place.  We’ve seen the discipline emerge from an esoteric specialty into full mainstream visibility for the business community.  Interpreting what we’ve learned about data science through a structural and historic perspective led me to draw a broad parallel between data science now and natural philosophy at its early stages of evolution.

We also shared some exciting new models for enterprise information engagement; crafting scenarios using the language of discovery to describe information needs and activity at the level of discovery architecture, IT portfolio planning, and knowledge management (which correspond to UX, technology, and business perspectives as applied to larger scales and via business dialog) – demonstrating the versatility of the language as a source of linkage across separate disciplines.

But the primary message I wanted to share is that discovery is the most important organizational capability for the era.  More on this in follow up postings that focus on smaller chunks of the thinking encapsulated in the full deck of slides.

Discovery and the Age of Insight: Walmart EIM Open House 2013 from Joe Lamantia


Big Data Is Not the Insight: Slides From Enterprise Search Europe

May 21st, 2013 — 12:00am

Slides from my talk Big Data Is Not the Insight: The Language of Discovery at Enterprise Search Europe in London last week are available for viewing and download from slideshare. The conference was a good gathering of leading perspectives on search in Europe, definitely one I’d look forward to attending again. And of course London is lovely in May, even when it feels more like winter than spring…

Big Data Is Not the Insight: The Language Of Discovery: from Joe Lamantia


The Architecture of Discovery: Slides from Discover Conference 2011

April 16th, 2011 — 12:00am

Endeca invites customers, partners and leading members of the broader search and discovery technology and solutions communities to meet annually, and showcase the most interesting and exciting work in the field of discovery.  As lead for the UX team that designs Endeca’s discovery products, I shared some of our recent work on patterns in the structure of discovery applications, as well as best practices in information design and visualization that we use to drive product definition and design for Endeca’s Latitude Discovery Framework.

This material is useful for program and project managers and business analysts defining requirements for discovery solutions and applications, UX and system architects crafting high-level structures and addressing long-term growth, and interaction designers and technical developers defining and building information workspaces at a fine grain.

There are three major sections: the first presents some of our tools for identifying and understanding people’s needs and goals for discovery in terms of activity (the Language of Discovery as we call it), the second brings together screen-level, application level, and user scenario / use-case level patterns we’ve observed in the applications created to meet those needs, and the final section shares condensed best practices and fundamental principles for information design and visualization based on academic research disciplines such as cognitive science and information retrieval.

It’s no coincidence that these sections reflect the application of the core UX disciplines of user research, information architecture, and interaction design to the question of “who will need to encounter information for some end, and in what kind of experience will they encounter it”.  This flow and ordering is deliberate; it demonstrates on two levels the results of our own efforts applying the UX perspective to the questions inherent in creating discovery tools, and shares some of the tools, insights, templates, and resources we use to shape the platform used to create discovery experiences across diverse industries.

Session outline

Session description

“How can you harness the power and flexibility of Latitude to create useful, usable, and compelling discovery applications for enterprise discovery workers? This session goes beyond the technology to explore how you can apply fundamental principles of information design and visualization, analytics best practices and user interface design patterns to compose effective and compelling discovery applications that optimize user discovery, success, engagement, & adoption.”

The patterns are product specific in that they show how to compose screens and applications using the predefined components in the Discovery Framework library.  However, many of the product-specific components are built to address common or recurring needs for interaction with information via well-known mechanisms such as search, filtering, navigation, visualization, and presentation of data.  In other words, even if you’re not using the literal Discovery Framework component library to compose your specific information analysis workspace, you’ll find these patterns relevant at workspace and application levels of scale.

The deeper story of these patterns is in demonstrating the evolution of discovery and analysis applications over time.  Typically, discovery applications begin by offering users a general-purpose workspace that satisfies a wide range of interaction tasks in an approximate fashion.  Over time, via successive expansions in the scope and variety of data they present, and the discovery and analysis capabilities they provide, discovery applications grow to include several different types of workspaces that individually address distinct sets of needs for visualization and sense making by using very different combinations of components.  As a composite, these functional and informationally diverse workspaces span the full range of interaction needs for differing types of users.

I hope you find this toolkit and collection of patterns and information design principles useful.  What are some of the resources you’re using to take on these challenges?

User Experience Architecture For Discovery Applications from Joe Lamantia
