Category: Enterprise


Data Science and Empirical Discovery: A New Discipline Pioneering a New Analytical Method

March 26th, 2014 — 12:00am

One of the essential patterns of science and industry in the modern era is that new methods for understanding — what I’ll call sensemaking from now on — often emerge hand in hand with new professional and scientific disciplines.  This linkage between new disciplines and new methods follows from the deceptively simple imperative to realize new types of insight, which often means analysis of new kinds of data, using new techniques, applied from newly defined perspectives. New viewpoints and new ways of understanding are bound together in a sort of symbiosis.

One familiar example of this dynamic is the rapid development of statistics during the 18th and 19th centuries, in close parallel with the rise of new social science disciplines including economics (originally political economy) and sociology, and natural sciences such as astronomy and physics.  On a very broad scale, we can see the pattern in the tandem evolution of the scientific method for sensemaking, and the codification of modern scientific disciplines based on precursor fields such as natural history and natural philosophy during the scientific revolution.

Today, we can see this pattern clearly in the simultaneous emergence of Data Science as a new and distinct discipline accompanied by Empirical Discovery, the new sensemaking and analysis method Data Science is pioneering.  Given its recent dramatic rise to prominence, declaring Data Science a new professional discipline should inspire little controversy. Declaring Empirical Discovery a new method may seem bolder, but with the essential pattern of new disciplines appearing in tandem with new sensemaking methods in mind, it is more controversial to suggest that Data Science is a new discipline lacking a corresponding new method for sensemaking.  (I would argue it is the method that makes the discipline, not the other way around, but that is a topic for fuller treatment elsewhere.)

What is empirical discovery?  While empirical discovery is a new sensemaking method, we can build on two existing foundations to understand its distinguishing characteristics, and help craft an initial definition.  The first of these is an understanding of the empirical method. Consider the following description:

“The empirical method is not sharply defined and is often contrasted with the precision of the experimental method, where data are derived from the systematic manipulation of variables in an experiment.  …The empirical method is generally characterized by the collection of a large amount of data before much speculation as to their significance, or without much idea of what to expect, and is to be contrasted with more theoretical methods in which the collection of empirical data is guided largely by preliminary theoretical exploration of what to expect. The empirical method is necessary in entering hitherto completely unexplored fields, and becomes less purely empirical as the acquired mastery of the field increases. Successful use of an exclusively empirical method demands a higher degree of intuitive ability in the practitioner.”

Data Science as practiced is largely consistent with this picture.  Empirical prerogatives and understandings shape the procedural planning of Data Science efforts, rather than theoretical constructs.  Semi-formal approaches predominate over explicitly codified methods, signaling the importance of intuition.  Data scientists often work with data that is on-hand already from business activity, or data that is newly generated through normal business operations, rather than seeking to acquire wholly new data that is consistent with the design parameters and goals of formal experimental efforts.  Much of the sensemaking activity around data is explicitly exploratory (what I call the ‘panning for gold’ stage of evolution – more on this in subsequent postings), rather than systematic in the manipulation of known variables.  These exploratory techniques are used to address relatively new fields such as the Internet of Things, wearables, and large-scale social graphs and collective activity domains such as instrumented environments and the quantified self.  These new domains of application are not mature in analytical terms; analysts are still working to identify the most effective techniques for yielding insights from data within their bounds.

The second relevant perspective is our understanding of discovery as an activity that is distinct and recognizable in comparison to generalized analysis: from this, we can summarize discovery as sensemaking intended to arrive at novel insights, through exploration and analysis of diverse and dynamic data in an iterative and evolving fashion.

Looking deeper, one specific characteristic of discovery as an activity is the absence of formally articulated statements of belief and expected outcomes at the beginning of most discovery efforts.  Another is the iterative nature of discovery efforts, which can change course in non-linear ways and even ‘backtrack’ on the way to arriving at insights: both the data and the techniques used to analyze data change during discovery efforts.  Formally defined experiments are much more clearly determined from the beginning, and their definition is less open to change during their course. A program of related experiments conducted over time may show iterative adaptation of goals, data and methods, but the individual experiments themselves are not malleable and dynamic in the fashion of discovery.  Discovery’s emphasis on novel insight as preferred outcome is another important characteristic; by contrast, formal experiments are repeatable and verifiable by definition, and the degree of repeatability is a criterion of well-designed experiments.  Discovery efforts often involve an intuitive shift in perspective that is recountable and retraceable in retrospect, but cannot be anticipated.

Building on these two foundations, we can define Empirical Discovery as a hybrid, purposeful, applied, augmented, iterative and serendipitous method for realizing novel insights for business, through analysis of large and diverse data sets.

Let’s look at these facets in more detail.

Empirical discovery primarily addresses the practical goals and audiences of business (or industry), rather than scientific, academic, or theoretical objectives.  This is tremendously important, since the practical context impacts every aspect of Empirical Discovery.

‘Large and diverse data sets’ reflects the fact that Data Science practitioners engage with Big Data as we currently understand it: situations in which the confluence of data types and volumes exceeds the capabilities of business analytics (tools, infrastructure, practices, etc.) to practically realize insights.

Empirical discovery uses a rapidly evolving hybridized toolkit, blending a wide range of general and advanced statistical techniques with sophisticated exploratory and analytical methods from a wide variety of sources that includes data mining, natural language processing, machine learning, neural networks, Bayesian analysis, and emerging techniques such as topological data analysis and deep learning.

What’s most notable about this hybrid toolkit is that Empirical Discovery does not originate novel analysis techniques; it borrows tools from established disciplines such as information retrieval, artificial intelligence, computer science, and the social sciences.  Many of the more specialized or apparently exotic techniques data science and empirical discovery rely on, such as support vector machines, deep learning, or measuring mutual information in data sets, have established histories of usage in academic or other industry settings, and have reached reasonable levels of maturity.  Empirical discovery’s hybrid toolkit is transposed from one domain of application to another, rather than invented.
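To make the “transposed, not invented” point concrete, here is a minimal sketch of one borrowed technique named above: measuring mutual information between two columns of a business data set. The tooling (numpy, scikit-learn) and the toy data are my assumptions; the post specifies neither.

```python
# Minimal sketch (illustration only): mutual information between two
# columns of a tabular data set, using scikit-learn.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(42)

# Toy data: a marketing touch and a purchase flag with weak dependence.
touched = rng.integers(0, 2, size=10_000)
purchased = touched & rng.integers(0, 2, size=10_000)

mi = mutual_info_score(touched, purchased)
print(f"mutual information: {mi:.4f} nats")  # > 0 signals dependence
```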

Empirical Discovery is an applied method in the same way Data Science is an applied discipline: it originates in and is adapted to business contexts, it focuses on arriving at useful insights to inform business activities, and it is not used to conduct basic research.  At this early stage of development, Empirical Discovery has no independent and articulated theoretical basis and does not (yet) advance a distinct body of knowledge based on theory or practice. All viable disciplines have a body of knowledge, whether formal or informal, and applied disciplines have only their cumulative body of knowledge to distinguish them, so I expect this to change.

Empirical discovery is not only applied, but explicitly purposeful in that it is always set in motion and directed by an agenda from a larger context, typically the specific business goals of the organization acting as a prime mover and funding data science positions and tools.  Data Science practitioners effect Empirical Discovery by making it happen on a daily basis – but wherever there is empirical discovery activity, there is sure to be intentionality from a business view.  For example, even in organizations with a formal hack time policy, our research suggests there is little or no completely undirected or self-directed empirical discovery activity, whether conducted by formally recognized Data Science practitioners, business analysts, or others.

One very important implication of the situational purposefulness of Empirical Discovery is that there is no direct imperative for generating a body of cumulative knowledge through original research: the insights that result from Empirical Discovery efforts are judged by their practical utility in an immediate context.  There is also no explicit scientific burden of proof or verifiability associated with Empirical Discovery within its primary context of application.  Many practitioners encourage some aspects of verifiability, for example, by annotating the various sources of data used for their efforts and the transformations involved in wrangling data on the road to insights or data products, but this is not a requirement of the method.  Another implication is that empirical discovery does not adhere to any explicit moral, ethical, or value-based missions that transcend working context.  While Data Scientists often interpret their role as transformative, this is in reference to business.  Data Science is not medicine, for example, with a Hippocratic oath.

Empirical Discovery is an augmented method in that it depends on computing and machine resources to increase human analytical capabilities: It is simply impractical for people to manually undertake many of the analytical techniques common to Data Science.  An important point to remember about augmented methods is that they are not automated; people remain necessary, and it is the combination of human and machine that is effective at yielding insights.  In the problem domain of discovery, the patterns of sensemaking activity leading to insight are intuitive, non-linear, and associative; activities with these characteristics are not fully automatable with current technology. And while many analytical techniques can be usefully automated within boundaries, these tasks typically make up just a portion of a complete discovery effort.  For example, using latent class analysis to explore a machine-sampled subset of a larger data corpus is task-specific automation complementing human perspective at particular points of the Empirical Discovery workflow.  This dependence on machine augmented analytical capability is recent within the history of analytical methods.  In most of the modern era — roughly the later 17th, 18th, 19th and early 20th centuries — the data employed in discovery efforts was manageable ‘by hand’, even when using the newest mathematical and analytical methods emerging at the time.  This remained true until the effective commercialization of machine computing ended the need for human computers as a recognized role in the middle of the 20th century.
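A hedged sketch of the workflow step just described: machine-sample a subset of a larger corpus, fit an unsupervised class model, and leave interpretation to the analyst. scikit-learn has no latent class analysis proper, so a Gaussian mixture stands in here as a rough analogue (a deliberate substitution), and the corpus and column names are entirely hypothetical.

```python
# Task-specific automation inside a discovery workflow: sample a subset
# by machine, fit a latent-class-style model, then hand off to a human.
# A Gaussian mixture substitutes for latent class analysis (scikit-learn
# has no LCA); corpus and columns are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for a much larger corpus of behavioral records.
corpus = pd.DataFrame({
    "session_length": rng.exponential(5.0, 1_000_000),
    "pages_viewed": rng.poisson(8, 1_000_000).astype(float),
    "spend": rng.gamma(2.0, 30.0, 1_000_000),
})

# The machine-sampled subset of the larger corpus.
subset = corpus.sample(n=50_000, random_state=0)

# Fit four latent classes and label the sampled rows.
gm = GaussianMixture(n_components=4, random_state=0).fit(subset)
subset = subset.assign(latent_class=gm.predict(subset))

# Automation ends here; deciding what the classes *mean* is the human
# half of the augmented method.
print(subset.groupby("latent_class").mean())
```

The design point is the boundary: everything above the final print is automatable within bounds, while naming and acting on the classes is not.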

The reality of most analytical efforts — even those with good initial definition — is that insights often emerge in response to and in tandem with changing and evolving questions which were not identified, or perhaps not even understood, at the outset.  During discovery efforts, analytical goals and techniques, as well as the data under consideration, often shift in unpredictable ways, making the path to insight dynamic and non-linear.  Further, the sources of and inspirations for insight are difficult or impossible to identify both at the time and in retrospect. Empirical discovery addresses the complex and opaque nature of discovery with iteration and adaptation, which combine to set the stage for serendipity.

With this initial definition of Empirical Discovery in hand, the natural question is what it means for Data Science and business analytics.  Three things stand out for me.  First, I think one of the central roles played by Data Science is in pioneering the application of existing analytical methods from specialized domains to serve general business goals and perspectives, seeking effective ways to work with the new types (graph, sensor, social, etc.) and tremendous volumes (yotta, yotta, yotta…) of business data at hand in the Big Data moment, and to realize insights from them.

Second, following from this, Empirical Discovery is a methodological framework within and through which a great variety of analytical techniques at differing levels of maturity, drawn from other disciplines, are vetted for business analytical utility in iterative fashion by Data Science practitioners.

And third, it seems this vetting function is deliberately part of the makeup of empirical discovery, which I consider a very clever way to create a feedback loop that enhances Data Science practice by using Empirical Discovery as a discovery tool for refining its own methods.

Comment » | Big Data, Enterprise, Language of Discovery

Big Data is a Condition (Or, “It’s (Mostly) In Your Head”)

March 10th, 2014 — 12:00am

Unsurprisingly, definitions of Big Data run the gamut from the turgid to the flip, making room to include the trite, the breathless, and the simply uninspiring in the big circle around the campfire. Some of these definitions are useful in part, but none of them captures the essence of the matter. Most are mistakes in kind, trying to ground and capture Big Data as a ‘thing’ of some sort that is measurable in objective terms. Anytime you encounter a number, this is the school of thought.

Some approach Big Data as a state of being, most often a simple operational state of insufficiency of some kind; typically resources like analysts, compute power or storage for handling data effectively; occasionally something less quantifiable like clarity of purpose and criteria for management. Anytime you encounter phrasing that relies on the reader to interpret and define the particulars of the insufficiency, this is the school of thought.

I see Big Data as a self-defined (perhaps diagnosed is more accurate) condition, but one that is based on idiosyncratic interpretation of current and possible future situations in which understanding of, planning for, and activity around data are central.

Here’s my working definition: Big Data is the condition in which very high actual or expected difficulty in working successfully with data combines with very high anticipated but unknown value and benefit, leading to the a priori assumption that currently available information management and analytical capabilities are broadly insufficient, making new and previously unknown capabilities seemingly necessary.
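To underline that this names a condition rather than a measurement, here is a toy rendering of the definition as a predicate (my construction, not the author’s): the inputs are judgments about difficulty and anticipated value, not byte counts.

```python
# Toy rendering of the working definition above (my construction):
# Big Data as a diagnosed condition over perceptions, not a data metric.
from dataclasses import dataclass

@dataclass
class Outlook:
    perceived_difficulty: float  # 0..1, actual or expected difficulty
    anticipated_value: float     # 0..1, high but unknown value and benefit

def big_data_condition(o: Outlook, threshold: float = 0.8) -> bool:
    """True when high difficulty and high anticipated value combine,
    prompting the a priori assumption that capabilities fall short."""
    return (o.perceived_difficulty >= threshold
            and o.anticipated_value >= threshold)

print(big_data_condition(Outlook(0.9, 0.85)))  # True: the condition holds
print(big_data_condition(Outlook(0.9, 0.3)))   # False: hard, but little expected value
```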

Comment » | Big Data, Enterprise, Language of Discovery

Fall Speaking: Janus Boye Conference, EuroIA, BlogTalk

August 25th, 2009 — 12:00am

A quick rundown on my fall speaking schedule so far.

First up is BlogTalk 2009, in Jeju, Korea on September 15 and 16. There I’ll be talking about ‘The Architecture of Fun’ – sharing a new design language for emotion that’s been in use in the game design industry for quite a while.  [Disclosure: While it’s a privilege to be on the program with so many innovative and insightful social media figures, I’m also really looking forward to the food in Korea :) ]

Next up is EuroIA in Copenhagen, September 26 and 27.  For the latest edition of this largest gathering of the user experience community in Europe, I’ll reprise my Architecture of Fun talk.


Wrapping up the schedule so far is the Janus Boye conference in Aarhus, November 3 – 6.  Here I’m presenting a half-day tutorial titled Designing Information Experiences.  This is an extensive, detailed tutorial that anyone working in information management will benefit from, as it combines two of my passions: designing for people, and using frameworks to enhance solution scope and effectiveness.


Here’s the description from the official program:

When designing for information retrieval experiences, the customer must always be right. This tutorial will give you the tools to uncover user needs and design the context for delivering information, whether that be through search, taxonomies or something entirely different.

What you will learn:
•    A broadly applicable method for understanding user needs in diverse information access contexts
•    A collection of information retrieval patterns relevant to multiple settings such as enterprise search and information access, service design, and product and platform management

We will also discuss the impact of organizational and cultural factors on design decisions and why it is essential that you frame business and technology challenges in the right way.

The tutorial builds on lessons learned from a large customer project focusing on transforming user experience. The scope of this program included ~25 separate web-delivered products, a large document repository, integrated customer service and support processes, content management, taxonomy and ontology creation, and search and information retrieval solutions. Joe will share the innovative methods and surprising insights that emerged in the process.

Janus Boye gathers leading local and international practitioners, and is a new event for me, so I’m very much looking forward to it.

I hope to see some of you at one or more of these gatherings that altogether span half the world!

Comment » | Dashboards & Portals, Enterprise, Information Architecture, User Experience (UX), User Research

“Enhancing Dashboard Value and User Experience” Live at Boxes and Arrows

March 5th, 2008 — 12:00am

Boxes and Arrows just published Enhancing Dashboard Value and User Experience, part 5 of the building blocks series that’s been running since last year. This installment covers how to include high-value social and conversational capabilities in portal experiences built on top of architectures managed with the building blocks. Enhancing Dashboard Value and User Experience also provides an explicit user experience vision for portals, metadata and user interface recommendations, and tips on making portals easier to use, manage, and administer.
Thanks again to all the good people who volunteer their time to make Boxes and Arrows such a high quality publication!

Comment » | Building Blocks, Dashboards & Portals, Enterprise, Information Architecture, User Experience (UX)

Moving Beyond Reactive IT Strategy With User Experience

May 9th, 2007 — 12:00am

For those in the enterprise IA / UX space, The next frontier in IT strategy: A McKinsey Survey, centered on the idea that “…IT strategy is maturing from a reactive to a proactive stance,” is worth a look.

This nicely parallels a point made about the reactive mindset common to IT in many large organizations, in a discussion on the IAI mailing list last month. Lou Rosenfeld’s post, Information architects on communicating to IT managers, summarizes the original discussion in the IAI thread, and is worth reading as a companion piece.

Lou’s summary of information architecture and user experience voices in the enterprise arena is noteworthy for including many examples of strong correspondence between McKinsey’s understanding of how IT strategy will mature (a traditional management consulting view), and the collected IA / UX viewpoints on addressing IT leadership – typical buyers for enterprise anything – and innovation.

Dialogs that show convergence of understanding like this serve as positive signs for the future. At present, a large set of deeply rooted cultural assumptions (at their best inaccurate, usually reductive, sometimes even damaging) about the roles of IT, business, and design combine with the historical legacies of corporate structures to needlessly limit what’s possible for User Experience and IA in the enterprise landscape. In practical terms, I’m thinking of those limitations as barriers to the strategy table; constraining who can talk to whom, and about which important topics, such as how to spend money, and where the business should go.

Considering the gulf that separated UX and IT viewpoints ten – or even five – years ago, this kind of emerging common understanding is a good sign that the cultural obstacles to a holistic view of the modern enterprise are waning. We know that a holistic view will rely on deep understanding of the user experience aspects of business at all levels to support innovation in products and services. I’m hoping the rest of the players come to understand this soon.

Another good sign is that CIOs have won a seat at the strategy table, after consistent effort:

Further evidence of IT’s collaborative role in shaping business strategy is the fact that so many CIOs now have a seat at the table with senior management. They report to the CEO in 44 percent of all cases; an additional 42 percent report to either the chief operating officer or the chief financial officer.

Looking ahead, information architecture and user experience viewpoints and practitioners should work toward a similar growth path. We fill a critical and missing strategic role that other traditional viewpoints are not as well positioned to supply.

Quoting McKinsey again:

IT strategy in most companies has not yet reached its full potential, which in our experience involves exploiting innovation to drive constant improvement in the operations of a business and to give it a real advantage over competitors with new products and capabilities. Fewer than two-thirds of the survey respondents say that technological innovation shapes their strategy. Only 43 percent say they are either very or extremely effective at identifying areas where IT can add the most value.

User Experience can and should have a leading voice in setting the agenda for innovation, and shaping understandings of where IT and other groups can add the most value in the enterprise. To this end, I’ll quote Peter Merholz (with apologies for not asking in advance):

“…we’ve reached a point where we’ve maximized efficiency until we can’t maximize no more, and that in order to realize new top-line value, we need to innovate… And right now, innovations are coming from engaging with the experiences people want to have and satisfying *that*.”

McKinsey isn’t making the connection between strategic user experience perspectives and innovation – at least not yet. That’s most likely a consequence of the fact that management consulting firms base their own ways of thinking, organizational models, and product offerings (services, intellectual property, etc.) on addressing buyers who are themselves deeply entrenched in traditional corporate structures and worldviews. And in those worlds, everything is far from miscellaneous, as a glance at the available category options demonstrates: your menu here includes Corporate Finance, Information Technology, Marketing, Operations, Strategy…

BTW: if you weren’t convinced already, this should demonstrate the value of the $40 IAI annual membership fee, or of simply reading Bloug, which is free, over paying for subscriptions to management journals :)

Comment » | Customer Experiences, Enterprise, Information Architecture, User Experience (UX)

Endeca Guided Navigation vs. Facets In Search Experiences

February 26th, 2007 — 12:00am

A recent question on the mailing list for the Taxonomy Community of Practice asked about search vendors whose products handle faceted navigation, and mentioned Endeca. Because vendor marketing too often distorts the meaning of accepted terms, it’s worth pointing out that Endeca’s tools differ from faceted navigation and organization systems in a number of key ways. These differences should affect strategy and purchase decisions on the best approach to providing high quality search experiences for users.

The Endeca model is based on Guided Navigation, a product concept that blends elements of user experience, administration, functionality, and possible information structures. In practice, guided navigation feels similar to facets, in that sets of results are narrowed or filtered by successive choices from available attributes (Endeca calls them dimensions).

But at heart, Endeca’s approach is different in key ways.

  • Facets are orthogonal, whereas Endeca’s dimensions can overlap.
  • Facets are ubiquitous, so always apply, whereas Endeca’s dimensions can be conditional, sometimes applying and sometimes not.
  • Facets reflect a fundamental characteristic or aspect of the pool of items. Endeca’s dimensions may reflect some aspect of the pool of items (primary properties); they may be inferred (secondary properties); they may be outside criteria, etc.
  • The values possible for an individual facet are flat and equivalent. Endeca’s dimensions can contain various kinds of structures (unless I’m mistaken), and may not be equivalent.
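To make the contrast concrete, here is a minimal data sketch of the two models (my illustration, not Endeca’s actual data model or API): facets as flat, ubiquitous attributes; dimensions as structures that may nest and may apply only conditionally.

```python
# Minimal sketch of the contrast above (illustration only; not Endeca's
# actual data model or API).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Facet:
    # Orthogonal and ubiquitous: applies to every item, with a flat set
    # of equivalent values.
    name: str
    values: set

@dataclass
class Dimension:
    # Endeca-style: can nest (internal structure), and can be scoped so
    # it applies only to part of the collection.
    name: str
    children: list = field(default_factory=list)
    applies_when: Optional[str] = None  # e.g. "category == 'wine'"

# A facet's values are flat peers, and the facet applies everywhere.
color = Facet("color", {"red", "white", "rosé"})

# A dimension can carry hierarchy and conditional applicability.
vintage = Dimension(
    "vintage",
    children=[Dimension("1990s"), Dimension("2000s")],
    applies_when="category == 'wine'",
)
```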

In terms of application to various kinds of business needs and user experiences, facets can offer great power and utility for quickly identifying and manipulating large numbers of similar or symmetrical items, typically in narrower domains. Endeca’s guided navigation is well suited to broader domains (though there is still a single root at the base of the tree), with fuzzier structures than facets.

In practice, facets often don’t serve well as a unifying solution for providing structure and access to heterogeneous collections, and can encounter scaling difficulties when used for homogeneous collections. Faceted experiences can offer genuine bidirectional navigation for users, meaning they work equally well for navigation paths that expand item sets from a single item to larger collections of similar items, because of the symmetry built into faceted systems.

Guided navigation is better able to handle heterogeneous collections, but is not as precise for identification, does not reflect structure, and requires attention to define correctly (in ways that are not confusing or conflicting) and to manage over time. Endeca’s dimensions do not offer bidirectional navigation by default (because of their structural differences), though it is possible to create user experiences that support bidirectional navigation using Endeca.

In sum, these differences should help explain both the popularity of Endeca in ecommerce contexts, where every architectural incentive to increase the total value of customer purchases is significant (even when it does not align with user goals), and the relevance of facets to search and information retrieval experiences that support a broader set of user goals within narrower information domains.

Comment » | Enterprise, Information Architecture, User Experience (UX)

Smart Scoping For Content Management: Use The Content Scope Cycle

February 19th, 2007 — 12:00am

Content management efforts are justly infamous for exceeding budgets and timelines, despite making considerable accomplishments. Exaggerated expectations for tool capabilities (vendors promise a world of automagic simplicity, but don’t believe the hype) and the potential value of cost and efficiency improvements from managing content creation and distribution play a substantial part in this. But unrealistic estimates of the scope of the content to be managed make a more important contribution to most cost and time overruns.

Scope in this sense is a combination of the quantity and the quality of content; smaller amounts of very complex content substantially increase the overall scope of needs a CM solution must manage effectively. By analogy, imagine building an assembly line for toy cars, then deciding it has to handle the assembly of just a few full size automobiles at the same time.

Early and inaccurate estimates of content scope have a cascading effect, decreasing the accuracy of budgets, timelines, and resource forecasts for all the activities that follow.

In a typical content management engagement, the activities affected include:

  • taking a content inventory
  • defining content models
  • choosing a new content management system
  • designing content structures, workflows, and metadata
  • migrating content from one system to another
  • refreshing and updating content
  • establishing sound governance mechanisms

The Root of the Problem
Two misconceptions — and two common but unhealthy practices, discussed below — drive most content scope estimates. First: the scope of content is knowable in advance. Second, and more misleading: scope remains fixed once defined. Neither of these assumptions is valid: identifying the scope of content with accuracy is unlikely without a comprehensive audit, and content scope (initial, revised, actual) changes considerably over the course of the CM effort.

Together, these assumptions make it very difficult for program directors, project managers, and business sponsors to set accurate and detailed budget and timeline expectations. The uncertain or shifting scope of most CM efforts conflicts directly with business imperatives to carefully manage IT capital investment and spending, a necessity in most funding processes, and especially at the enterprise level. Instead of estimating specific numbers long in advance of reality (as with the Iraq war budget), a better approach is to embrace fluidity, and plan to refine scope estimates at punctuated intervals, according to the natural cycle of content scope change.

Understanding the Content Scope Cycle
Content scope changes according to a predictable cycle that is largely independent of the specifics of a project, system, organizational setting, and scale. This cycle seems consistent at the level of local CM efforts for a single business unit or isolated process, and at the level of enterprise scale content management efforts. Understanding the cycle makes it possible to prepare for shifts in a qualitative sense, accounting for the kind of variation to expect while planning and setting expectations with stakeholders, solution users, sponsors, and consumers of the managed content.

The Con­tent Scope Cycle

The high peak and elevated mountain valley shape in this illustration tell the story of scope changes through the course of most content management efforts. From the initial inaccurate estimate, scope climbs consistently and steeply during the discovery phase, peaking in potential after all discovery activities conclude. Scope then declines quickly, but not to the original level, as assessments cull unneeded content. Scope levels out during system / solution / infrastructure creation, and climbs modestly during revision and replacement activities. At this point, the actual scope is known. Measured increases driven by the incorporation of supplemental material then increase scope in stages.

Local and Enterprise Cycles

Applying the context-independent view of the cycle to a local level reveals a close match with the activities and milestones for a content management effort for a small body of content, a single business unit of a larger organization, or a self-contained business process.

Local Content Management Scope Cycle
At the enterprise level, the cycle is the same. This illustration shows activities and milestones for a content management effort for a large and diverse body of content, multiple business units of a larger organization, or multiple and interconnected business processes.

Enterprise Content Management Scope Cycle

Scope Cycle Changes

This graph shows the amount of scope change at each milestone, versus its predecessor. Looking at the changes for any patterns of clustering and frequency, it’s easy to see the cycle breaks down into three major phases: an initial period of dynamic instability, a static and stable phase, and a concluding (and ongoing, if the effort is successful) phase of dynamic stability.

Scope Cycle Phases

Where does the extra scope come from? In other words, what’s the source of the unexpected quantity and complexity of content behind the spikes and drops in expected scope in the first two phases? And what drives the shifts from one phase to another?

Bad CM Habits

Two common approaches account for a majority of the dramatic shifts in content scope. Most significantly, the people with immediate knowledge of the content quantity and complexity rarely have a direct voice in setting the scope and timeline expectations.

Too often, stakeholders with expertise in other areas (IT, enterprise architecture, application development) frame the problem and the solution far in advance. The content creators, publishers, distributors, and consumers are not involved early enough.

Secondly, those who frame the problem make assumptions about quantity and complexity that trend low. (This is a companion to the exaggeration of tool capabilities.) Each new business unit, content owner, and system administrator’s items included in the effort will increase the scope of the content in quantity, complexity, or both. Ongoing identification of new or unknown types of content, work flows, business rules, usage contexts, storage modes, applications, formats, syndication instances, systems, and repositories will continue to increase the scope until all relevant parties (creators, consumers, administrators, etc.) are engaged, and their needs and content collections fully understood.

The result is clear: a series of substantial scope errors of both under- and over-estimation, in comparison to the actual scope, concentrated in the first phase of the scope cycle.
Scope Errors

Smart Scoping
The scope cycle seems to be a fundamental pattern; likely an emergent aspect of the environments and systems underlying it, but that’s another discussion entirely. Failing to allow for the natural changes in scope over the course of a content management effort ties your success to inaccurate estimates, and thus to false expectations.

Smart scoping means allowing for and anticipating the inherent margins of error when setting expectations and making estimates. The most straightforward way to put this into practice and account for the likely margins of error is to adjust the timing of a scope estimate to the necessary level of accuracy.

Relative Scope Estimate Accuracy

Scoping and Budgeting
Estimation practices that respond to the content scope cycle can still satisfy business needs. At the enterprise CM level, IT spending plans and investment frameworks (often part of enterprise architecture planning processes) should allow for natural cycles by defining classes or kinds of estimates based on comparative degree of accuracy, and the estimator’s leeway for meeting or exceeding implied commitments. Enterprise frameworks will identify when more or less accurate estimates are needed to move through funding and approval gateways, based on each organization’s investment practices.

And at the local CM level, project planning and resource forecasting methods should allow for incremental allocation of resources to meet task and activity needs. Taking a content inventory is a substantial labor on its own, for example. The same is true of migrating a body of content from one or more sources to a new CM solution that incorporates changed content structures such as work flows and information architectures. The architectural, technical, and organizational capabilities and staff needed for inventorying and migrating content can often be met by relying on content owners and stakeholders, or hiring contractors for short and medium-term assistance.

Parallels To CM Spending Patterns
The content scope cycle strongly parallels the spending patterns during CMS implementation that James Robertson identified in June of 2005. I think the scope cycle correlates with the spending pattern James found, and it may even be a driving factor.

Scoping and Maturity

Unrealistic scope estimation that does not take the content scope cycle into account is typical of organizations undertaking a first content management effort. It is also common in organizations with content management experience, but low levels of content management maturity.

Two (informal) surveys of CMS practitioners spanning the past three years show the prevalence of scoping problems. In 2004, Victor Lombardi reported: “Of all tasks in a content management project, the creation, editing, and migration of content are probably the most frequently underestimated on the project plan.” [in Managing the Complexity of Content Management].

And two weeks ago, Rita Warren of CMSWire shared the results of a recent survey on challenges in content management (Things That Go Bump In Your CMS).

The top 5 challenges (most often ranked #1) were:

  1. Clarifying business goals
  2. Gaining and maintaining executive support
  3. Redesigning/optimizing business processes
  4. Gaining consensus among stakeholders
  5. Properly scoping the project

…“Properly scoping the project” was actually the most popular answer, showing up in the top 5 most often.

Accurate scoping is much easier for organizations with high levels of content management maturity. As the error margins inherent in early and inaccurate scope estimates demonstrate, there is considerable benefit in creating mechanisms and tools for effectively understanding the quantity and quality of content requiring management, as well as the larger business context, solution governance, and organizational culture concerns.

Comment » | Enterprise, Ideas, Information Architecture

Who Should Own How We Work? Collaboration, the New Enterprise Application

May 14th, 2006 — 12:00am

Collaboration is the latest rallying cry of software vendors hoping to embed new generations of enterprise class tools and user experiences into the fabric of the modern workplace. Microsoft, IBM, and other firms expect that control or leadership in the market for collaboration, whether by owning the architecture, systems, or other solution components, will be lucrative. A recent Radicati Group study (quality unconfirmed…) of the market size for enterprise collaboration offered an estimate of $1.6 billion now, growing 10% annually to $2.3 billion in 2010.

Beyond the substantial money to be made creating, selling, installing, and servicing collaboration solutions lies the strategic advantage of market definition. The vendor(s) that own(s) the collaboration space expect(s) to become integral to the knowledge economy’s supporting environment in the same way that Ford and General Motors became essential to the suburbanized consumer architectures of the post WWII era by serving simultaneously as employers, manufacturers, cultural marketers, capital reservoirs, and automobile sellers. Collaboration vendors know that achieving any level of indispensability will enhance their longevity by making them a necessity within the knowledge economy.

It’s worth taking a moment to call attention to the implications: by defining the user experiences and technological building blocks brought together to realize collaboration in large enterprises, these vendors will directly shape our basic concepts and understanding (our mental models and cognitive frames) of collaboration. Once embedded, these architectures, systems, and business processes, and the social structures and conceptual models created in response, will in large part define the (information) working environments of the future. And yes, this is exactly what these vendors aspire to achieve; the Microsoft Sharepoint Products and Technologies Development Team blog offers:

“SharePoint Products and Technologies have become a key part of our strategy for delivering a complete working environment for information workers, where they can collaborate together, share information with others, and find information and people that can help them solve their business problems.”
[From SHAREPOINT’S ROLE IN MICROSOFT’S COLLABORATION STRATEGY.]

And IBM’s marketing is not pitched and delivered in a manner as sweeping, but the implications are similar, as in the overview IBM® Workplace™: Simply a better way:
“IBM Workplace™ Solutions are role-based frameworks to help customers apply IBM Workplace technologies faster and more productively… These solutions are designed to provide ‘short-cuts’ for creating a high performance role-based work environment, helping to accelerate time-to-value.”

The models for communication and relationships built into our tools are very powerful, and often employed in other spheres of life. How many times have you started writing a birthday card for a friend, and found yourself instinctively composing a set of bullet points listing this person’s chief virtues, notable character traits, and the most important / amusing moments of your friendship? The creeping ubiquity of the rhetorical style of PowerPoint (Tufte’s essay here) is just one example of the tremendous social impact of a habituated model of communicative practices that’s run amok.

What does the future hold, in terms of enterprise vendor control over everyday working experiences? I’ve written before on the idea that the days of the monolithic enterprise systems are numbered, making the point along the way that these behemoths are the result of a top-down, one-size-fits-all approach. I think the same is true of the current approach to collaboration solutions and working environments. And so I was happy to see Andrew McAfee of Harvard Business School make several strong points about how enterprise collaboration efforts will realize greater success by *reducing* the amount of structure imposed on their major elements — roles, workflows, artifacts, and relationships — in advance of actual use.

McAfee sees considerable benefit in new approaches to enterprise IT investment and management that reduce the top-down and imposed nature of enterprise environments and solutions, in favor of emergent structures created by the people who must work successfully within them. McAfee advocates allowing staff to create the identities, structures and patterns that will organize and govern their collaboration environments as necessary, in an emergent fashion, instead of fixing these aspects long before users begin to collaborate.

McAfee says:
“When I look at a lot of corporate collaboration technologies after spending time at Wikipedia, del.icio.us, Flickr, and Blogger I am struck by how regimented, inflexible, and limited the corporate stuff seems, because it does some or all of the following:

  • Gives users identities before they start using the technology. These identities assign them certain roles, privileges, and access rights, and exclude them from others. These identities almost always also place them within the existing organizational structure and formal corporate hierarchy.
  • Contains few truly blank pages. Instead, it has lots of templates–for meetings, for project tracking, for documents and reports, etc.
  • Has tons of explicit or implicit workflow– seqences [sic] of tasks that must be executed in order.

How much of this structure is necessary? How much is valuable? Well, the clear success stories of Web 2.0 demonstrate that for at least some types of community and collaboration, none of it is.”

The critical question is then “what types of community and collaboration require which approaches to creating structure, and when?” As anyone who’s used a poorly or overly structured collaboration (or other enterprise) tool knows, the resulting environment is often analogous to a feudal society designed and managed by crypto-technical overlords; one in which most users feel as if they are serfs bound to the land in perpetuity in order to support the leisure-time and war-making indulgences of a small class of shareholding nobility.

Answering these questions with confidence based on experience will likely take time in the range of years, and require numerous failed experiments. There’s a larger context to take into account: the struggle of enterprise software vendors to extend their reach and longevity by dominating the language of collaboration and the range of offerings is one part of a much broader effort by society to understand dramatic shifts in our ways of working, and the social structures that are both driven by and shape these new ways of working. And so there are several important ideas and questions underlying McAfee’s assessment that social system designers should understand.

One of the most important is that the notion of “collaboration” is conceptual shorthand for how you work, who you work with, and what you do. In other words, it’s a distillation of your professional identity. Your role in a collaboration environment defines who you are within that environment.

More importantly, from the perspective of growth and development, your system-assigned role determines who you can *become*. Knowledge workers are valued for their skills, experience, professional networks, public reputations, and many other fluid, context dependent attributes. And so locking down their identities in advance strips them of a substantial proportion of their current value, and simultaneously reduces their ability to adapt, innovate, and respond to environmental changes by shifting their thinking or practices. In plain terms, determining their identities in advance precludes the creation of future value.

Another important underlying idea is the importance of properly understanding the value and utility of differing approaches to systematization in differing contexts. McAfee’s assessment of the unhealthy consequences of imposing too much structure in advance is useful for social system designers (such as information architects and knowledge managers), because it makes the outcomes of implicit design strategies and assumptions clear and tangible, in terms of the negative effects on the eventual users of the collaboration environment. For complex and evolving group settings like the modern enterprise, creating too much structure in advance points to a misplaced understanding of the value and role of design and architecture.

Fundamentally, it indicates an overestimation of the value of the activity of systematizing (designing) collaboration environments to high levels of detail, and without recognition of evolutionary dynamics. The design or structure of any collaboration environment — of any social system — is only valuable for how well it encourages relationships and activity which advance the goals of the organization and its members. The value of a designer in the effort to create a collaborative community lies in the ability to create designs that lead to effective collaboration, not in the number or specificity of the designs they produce, and especially not in the artifacts created during design — the templates, workflows, and roles McAfee mentioned above. To simplify the different views of what’s appropriate into two artificially segmented camps, the [older] view that results in the premature creation of too much structure validates the design of things / artifacts / static assemblies, whereas the newer view valuing minimal and emergent structures acknowledges the greater efficacy of designing dynamic systems / flows / frameworks.

The overly specific and rigid design of many collaboration system components coming from the older design viewpoint in fact says much about how large, complex enterprises choose to interpret their own characters, and create tools accordingly. Too often, a desire to achieve totality lies at the heart of this approach.

Of course, most totalities only make sense — exhibit coherence — when viewed from within, and when using the language and concepts of the totality itself. The result is that attempts to achieve totality of design for many complex contexts (like collaboration within enterprises large or small) represent a self-defeating approach. That the approach is self-defeating is generally ignored, because the pursuit of totality is a self-serving exercise in power validation, one that benefits power holders by consuming resources potentially used for other purposes, for example, to undermine their power.

With the chimera of totality set in proper context, it’s possible to see how collaboration environments — at least in their most poorly conceived manifestations — will resemble virtual retreads of Taylorism, wherein the real accomplishment is to justify the effort and expense involved in creating the system by pointing at an excessive quantity of predetermined structure awaiting habitation and use by disenfranchised staff.

At present, I see two divergent and competing trends in the realm of enterprise solutions and user experiences. The first trend is toward homogeneity of the working environment, with large amounts of structure imposed in advance, exemplified by comprehensive collaboration suites and architectures such as MSOffice / Sharepoint, or IBM’s Workplace.

The second trend is toward heterogeneity in the structures informing the working environment, visible as variable patterns and locuses of collaboration established by fluid groups that rely on an adhoc assortment of tools from different sources (BaseCamp, GMail, social bookmarking services, RSS syndication of social media structures, communities of practice, business services from ASP providers, open source applications, etc.).

But this itself is a short term view, when situating it within a longer term context is necessary. It is common for systems or environments of all sizes and complexities to oscillate cyclically from greater to lesser degrees of structure, along a continuum ranging from homogeneous to heterogeneous. In the short term view, then, the quest for totality equates to homogeneity, or even efforts at domination. In the long term view, however, the quest for totality could indicate an immature ecosystem that is not diverse, but may become so in time.

Applying two (potential) lessons from ecology — the value of diversity as an enhancer of overall resilience in systems, and the tendency of monocultures to exhibit high fragility — to McAfee’s points on emergence, as well as the continuum view of shifting degrees of homogeneity, should tell us that collaboration solution designers would be wise to do three things:

The end result should be an enterprise approach to collaboration that emphasizes the design of infrastructure for communities that create their own structures. Big vendors, be wary of this enlightened point of view, unless you’re willing to respond in kind.

Comment » | Architecture, Enterprise

The Continuing Death of Enterprise Software

January 6th, 2006 — 12:00am

Over at 37Signals, just before the new year started, David made the prediction that by the end of 2006, “Enterprise will follow legacy to become a common insult among software creators and users.”

I think this is already the case, unless the people you’re talking to earn their bread and butter by doing something related to enterprise software – but there’s interesting ground here that I’d like to explore for a bit. On the 37signals site there were some good comments to David’s posting – from developers, entrepreneurs, and quite a few other perspectives – but no one made the connection to Conway’s Law, from Melvin Conway’s “How Do Committees Invent?”, which I’ll quote here:
“…organizations which design systems… are constrained to produce designs which are copies of the communication structures of these organizations.”

A good example of Conway’s Law in action is PowerPoint. As Edward Tufte says in Metaphors For Presentations: Conway’s Law Meets PowerPoint, “The metaphor of PowerPoint is the software corporation itself.” [Aside: As a hard-working consultant who spends *waaaayyy* too much time creating presentations to use as discussion vehicles when instead a direct conversation between relevant parties is by far the best use of everyone’s time and money, I can’t say enough good things about Tufte’s campaign to remind the business world how to communicate clearly, by avoiding PowerPoint unless it’s appropriate…]

It’s no surprise then that ‘enterprise software’ as it is installed and configured in many large corporations is generally massive, anonymous, byzantine in structure and workings, indifferent or hostile to individual needs, offensively neutered in all aspects of its user experience, and often changed arbitrarily to align with a power calculus determined by a select few who operate at great remove from the majority of the people who use the environment on a daily basis. After all, that is the nature of communication in many large (and quite a few small and medium sized) corporations.

That enterprise software is bad – excruciatingly bad, if you’ve tried to enter expenses using a generic installation of PeopleSoft or Siebel – is hardly news. But it is interesting that David from 37Signals, Peter Merholz of Adaptive Path, Jared Spool of UIE, and many others who are less visible but still important in directing the evolution of the Internet, would all say in one form or another that they see enterprise software as on the outs.

It’s interesting because I think it highlights a shift in the realm in which the Web community sees itself as relevant. If there were ever a potential enterprise platform, it is the Web – the new Web, Web 2.0, whatever you want to call the emerging information environment that is global, ubiquitous, semantically integrated, socially informed and / or collaborative, architected to provide readily consumable services, etc. But aside from occasional bouts of megalomania, and potential success stories like Salesforce.com, the enterprise realm has been pro-forma outside the boundaries of the possible – until now…

Will enterprise software die? Not right away, and not totally. Remember, there’s A LOT of big iron happily humming away like WOPR in data centers all over the world that will run the enterprise apps we all know and detest for many years to come. More important, let’s keep in mind that enterprise software is really just one part (the installable and configurable software part) of what is easiest to describe as a way of doing things. It’s a reflection of a command and control, hierarchical viewpoint on how to achieve business goals through standardization. That way of doing things comes from a way of thinking. Which comes from a type of organization that will (of necessity?) be with us for a long time.

But the new stuff, the things that new school CIOs and CTOs will commit to, will likely be very different in origin, manner of working, user experience, fundamental assumptions, and capability. It will come from different kinds of organizations; leaner and more agile multi-disciplinary systems and environment design consortiums or aggregates, perhaps. This matches well with some of Jared Spool’s observations on the nature of organizations that create good designs, from his keynote address at UI 10 last fall.

Closing the circle, Conway confirms what these creators will look like; “Primarily, we have found a criterion for the structuring of design organizations: a design effort should be organized according to the need for communication.”

Comment » | Enterprise, Information Architecture, User Experience (UX)

Lotus Notes User Experience = Disease

September 22nd, 2005 — 12:00am

Lotus Notes has one of the most unpleasant and unwelcoming User Experiences this side of a medium-security prison where the warden has aspirations towards interior design and art instruction. One of the most painful aspects of the Notes experience is the default settings for font size and color in the email window. The default font size (for Macs) is on the order of 7 point type, and the default color for unread messages is — ironically — red. The combination yields a user experience that resembles a bad skin rash.

I call it “angry red microNotes” disease, and it looks like this:

[Screenshot: the default Lotus Notes inbox, showing tiny red unread messages.]

Overall, it has an unhealthy effect on one’s state of mind. The undertones of hostility and resentment running throughout are manifold. And naturally, it is impossible to change the default font size and color for the email reader. This is further confirmation of my theory that Notes has yet to escape its roots as a thick client for a series of unconnected databases.

After three weeks of suffering from angry red microNotes, I realized I was literally going blind from squinting at the tiny type, and went to Google for relief. I found niniX 1.7, a utility that allows Mac based Lotus Notes users to edit the binary format Notes preferences file, and change the font size of the email client. I share it in the hopes that others may break the chains that blind them. This will only solve half the problem — if someone can figure out how to change the default color for unread messages to something besides skin rash red, I will happily share with the rest of the suffering masses (and apparently there are on the order of 118 million of us out there).

But will it always be this (horrible) way?

In Beyond Notes 7.0: IBM Lotus sketches ‘Hannover’ user experience, Peter Bochner of SearchDomino.com says this of the next Notes release: “Notes has often been criticized for its somewhat staid user interface. According to IBM’s Bisconti, in creating Hannover, IBM paid attention ‘to not just the user interface, but the user experience.’”

Okay… So does that mean I’ll have my choice of diseases as themes for the user experience of my collaboration environment?

According to Ken Bisconti, IBM Lotus vice president of Workplace, portal and collaboration products, “Through improvements such as contextual collaboration and support for composite apps, we’ve gone above and beyond simple UI enhancement.”

I think simple UI enhancement is exactly what Ken and his team should focus on for the next several years, since they have so much opportunity for improvement.

Comment » | Enterprise, Tools, User Experience (UX)
