
The 3 Waves of Innovation in Information Management (and How Context is Everything)

“Omne trium perfectum” – Someone in ancient Rome

Roughly translated from Latin as “Everything that comes in threes is perfect” or “Every set of three is complete.”

Maybe it’s age creeping up on me — I turned 35 in May — but the sheer volume and variety of information, apps, services, series on Netflix, stores from which my daughter can purchase things — they’re virtually impossible to keep up with.

It’s the same in business, maybe worse. Information is exploding, like an IT big bang that just gets faster and farther apart with time: documents, emails, data, chats, tweets, twits sending tweets. It’s too much. Businesses and individuals are drowning in it. We’ve probably all heard one of those fantastic stats about how fast information is growing. They typically go something like this: “More information was created in the last two minutes than all information since the beginning of time.” Seriously though, I think it was Eric Schmidt, former Google CEO, who once said in a presentation, “Every two days now, we create as much information as we did from the dawn of civilization up until 2003.” I am not sure where he came up with that, or how one can be precise enough to determine the volume of everything created until 2003, but whatever the actual number, there’s inarguably a huge and accelerating, almost unfathomable, amount of information being created every day, and we’re all being forced to sink or swim.

So how do businesses and individuals keep up with it all? I don’t think you can. Unless you’re some kind of modern-day Rain Man, it’s a losing battle. Further, unfortunately, most of it is useless to us at any given time, and worse, it conceals and distracts us from the small subset of the information that we really need — the right information, the important stuff, what Stephen Covey, author of The 7 Habits of Highly Effective People, would call the Big Rocks.

In business and in life, the Big Rocks vary over time, by person or role, and by any number of other considerations. What’s important to someone now may not be important to them tomorrow or next week, and may never be important to someone else, and vice versa.

Context is Everything

Context is critical to determining the information that is most important at a given time. How to treat a snakebite is usually not that important… until you get bitten by a snake. Then it’s the most important information you’ll ever need. It’s a big rock at that point in time.

Maybe a little less dramatic, if you’re a salesperson working on an important proposal on a deadline for a key customer, then existing agreements, correspondence, past proposals and so on associated with that organization are crucial to you at that moment, not later. Agreements and correspondence with other companies? Not so much — unless, of course, they are related in some way, possibly in the context of a particular project, case or matter.

Context helps establish meaning and relevance to the task at hand. Context also acts to filter out the massive amount of information that isn’t important. We all know — especially in today’s hyper-partisan political environment — how taking something out of context can totally change its meaning or intent. Context is everything.

Omne Trium Perfectum (Everything that Comes in Threes is Perfect)

So how does this all relate to the “rule of three” referenced earlier?

To make that connection, let’s take a step back and look at how the technology for managing information — which includes organizing, securing, collaborating on and processing documents and other information — has evolved over time. For the sake of discussion, let’s refer to this class of problems and solutions, and all manner of related use cases, as information management. Of course, it includes general document management: “Let’s just get organized and make it easier to find and manage stuff.” But it also includes contract management, invoice processing, records management and more.

We believe we’re entering the third of three major phases of information management technology, each increasingly more sophisticated and powerful, and each with new and greater potential to impact a business’s bottom line.

The First Wave: The Monolith

A single central repository for all information — the proliferation of data silos

The first wave could actually be said to have begun back in the 1970s, when Xerox PARC came up with the desktop metaphor and mouse, with folders, the trashcan and other familiar PC features we see today. Then in the early and mid-1980s, Apple introduced the Lisa, which incorporated some of these elements, and then went mainstream with the Macintosh. These elements soon began to show up in the first versions of Microsoft Windows. In 1990, Documentum was founded; its product could be considered the first enterprise document management software. This is the category that came to be known as Enterprise Content Management, or ECM.

Before that, of course, there were hierarchical directories in VMS, UNIX, DOS and other early operating systems, and when networks started to become popular, these directories could be accessed by multiple people, forming the first incarnations of centralized repositories — basically early versions of shared network drives.

And way, way back before that, for many, many years, paper filing cabinets with all manner of organization schemes were the standard approach. By the way, it’s amazing how many manila folders and file cabinets still exist in mainstream, industry-leading companies today. In terms of storing and managing information, huge numbers of businesses and organizations are effectively driving around in the equivalent of Model A Fords. Yes, many have some later-model cars, even Teslas, parked out front. Some are indeed quite modern and up to date, but in the vast majority, the parking lot is still full of Model As.

I’ve referred to this wave or phase as The Monolith, as it was primarily characterized by the goal of moving all documents and similar unstructured content into a single central storage location or repository. Everything would be in one place; it would be easy to find and control. Everything would be good. Unfortunately, it was also generally isolated and not integrated with the other systems being used. The first silo was born.

There were inherent challenges with user adoption: many of these systems were somewhat complicated and difficult to use or, at a minimum, presented a different interface to learn. But that aside, one of the main challenges in this phase was that it required everyone to store and access information the same way. The organization had to come up with a common structure that attempted to meet the needs of the entire organization. This was usually a huge effort in compromise, often ending up with something that didn’t work very well for anyone, like making decisions by committee. It was, however, purpose-built and generally focused on one or more specific use cases — maybe document management, contract management, or records management.

The structure was hierarchical and rigid, difficult to change and adapt, especially after a large amount of information was stored in the system. It was like trying to change the foundation of a house that’s already been built. It can be done, but it’s complicated, expensive and likely to break something in the process.

It was also highly subjective, meaning that the way the company chose to organize information was almost totally based on the effectiveness and preferences of those setting up the system. This resulted in differences not only across industries, but between businesses in the same industry, and even between different divisions of the same business. I mean, do you organize documents by customer and date, or by date and customer, or perhaps project and customer? And then how does someone else do it? You get the idea.

This subjectivity meant that people often got it wrong, or simply stored things in the system in different and inconsistent ways. Legal might want documents stored by date, whereas sales wants them stored by customer. So where do the documents go? And what if they need to be in more than one place — for instance, the project folder AND the customer folder? Maybe they need to be stored also by time, in a folder for the month of April or May.

The need for a new use case would always arise; maybe at first, they were just doing general document management and later they needed to improve specifically how they managed contracts or invoices. When the new use case became important, the subjective, rigid, one-size-fits-all structure usually didn’t really align with the new need. The inherent inflexibility made it difficult to adapt without requiring lots of IT resources and impacting groups that were already relying on it. This often led to the implementation of another new and disconnected system to solve that specific use case. The second silo was born, and then the third, fourth, fifth and beyond.

A new approach — metadata-driven and thus “what vs. where”

Also around this time, a new approach emerged that wasn’t focused on a fixed hierarchical folder structure with primitive metadata. (Yes, metadata had been around since Documentum and before, but it was mainly just tags to search on.) This new approach relied on metadata to literally drive the system: it wasn’t about where information was stored, but about what the information was and what it was related to — it was a proposal related to this account, or an invoice related to this contact at that vendor.

This decoupled location from the classification scheme. It focused on describing the information, not on where it should be stored. You didn’t put a document in a folder or library; you just tagged it as a proposal or contract, related to this or that customer, project, case or literally anything, and went about your business.

The metadata-driven approach addressed two major shortcomings of the older, rigid, hierarchical systems.

First, it was dynamic. Information could show up in more than one location. The same document could be found by sales through its relationship to the customer, or by the services team based on its relationship to a certain project, or by the legal team by expiration date. It could be found with different search terms by different teams, roles and even individuals — the way people prefer to work.

Second, it was objective. A contract is a contract and its relationship to the parties involved is essentially the same, not only within a given business, or across businesses in the same industry, but even across industries, whether in manufacturing, financial services, construction or energy. Maybe it’s called an agreement or a lease, but it’s a common, objective concept.

The simple, objective nature of the approach also makes it more intuitive, leading to greater precision and accuracy, because, generally, people get it right and do it consistently. I mean, everyone knows what they are working on (hopefully), but knowing where to store it in a chaotic, ever-evolving folder hierarchy is an entirely different matter.
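To make the contrast concrete, here is a minimal sketch in Python of the metadata-driven idea — the document names, tags and field names are invented for illustration. A document carries descriptive attributes rather than a storage path, so different teams can retrieve the same file through entirely different filters.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A document described by what it is, not where it lives."""
    name: str
    doc_type: str                             # e.g. "contract", "proposal"
    tags: dict = field(default_factory=dict)  # arbitrary descriptive metadata

documents = [
    Document("MSA-Acme.pdf", "contract",
             {"customer": "Acme", "project": "Rollout", "expires": "2020-04-30"}),
    Document("Proposal-Acme.docx", "proposal",
             {"customer": "Acme", "project": "Rollout"}),
]

def find(docs, **criteria):
    """Return documents whose type or tags match every criterion."""
    def matches(d):
        return all(
            d.doc_type == v if k == "doc_type" else d.tags.get(k) == v
            for k, v in criteria.items()
        )
    return [d for d in docs if matches(d)]

# Sales finds documents by customer; legal finds the same contract by expiry.
by_customer = find(documents, customer="Acme")
by_expiry = find(documents, doc_type="contract", expires="2020-04-30")
```

The same contract surfaces in both queries, with no one deciding up front whether it “belongs” in a customer folder or a legal folder.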

The Second Wave: Unification

Breaking down and unifying disconnected silos with a repository-neutral approach — Information can be anywhere.

The second wave, which I’ll refer to as Unification, began to emerge over time, starting around the mid-2000s, with the realization that unstructured content — documents, images and the like — was not disconnected and separate; it was part of a greater whole. A contract or proposal was important to further a deal with a prospect that was being managed in the CRM. To efficiently get the whole picture and make fully informed decisions (and actually get work done), all of this information was necessary. Of course, every CRM and ERP tried to address this by adding its own home-grown content management capabilities, but that resulted in limited capabilities and additional silos.

In addition, the proliferation of silos described in the first wave created a growing problem where people needed information that existed in multiple systems. They were constantly moving from one system to another — context-switching — trying to remember the nuances of the other systems’ interfaces, and often still not finding what they needed. The pressure to break down these data silos and provide easier access to information in a variety of systems steadily increased. Unifying information across the business started to rival, and sometimes outweigh, the importance of a single system or use case.

One could say a clear demarcation marking the end of the first wave and the beginning of the second wave was in January 2017, when Gartner declared that ECM was dead and renamed the segment Content Services Platforms (CSPs). Gartner describes a CSP as follows:

Content Services are a set of services and microservices, embodied either as an integrated product suite or as separate applications that share common APIs and repositories, to exploit diverse content types and to serve multiple constituencies and numerous use cases across an organization.

Characteristics of a CSP include the ability to access information from different sources, which is to say it’s repository- or system-neutral and integrates well with other systems and line-of-business applications. This has also been referred to as backend-neutral — opening the backend of the system to connect to other systems through a single common interface. The term agnostic was also used to describe this characteristic — for instance, repository-agnostic or system-agnostic.

Gartner goes on to specifically mention metadata layers in their description of a CSP. Basically, the ability to use metadata to relate information contextually across various systems and repositories. They also describe a CSP as utilizing artificial intelligence — either built-in or accessed via third-party intelligence service APIs — to help automatically determine the characteristics of content and other information. This brings up an interesting point about the metadata-driven approach: it is well-suited to employ AI to automatically derive metadata and relationships that then drive the system. One could say that the metadata-driven approach is wired for AI.
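A repository-neutral metadata layer of this kind can be sketched as a common search interface that each connected system implements. This is an illustrative Python outline only — the connector classes and their in-memory indexes are invented stand-ins for real system integrations.

```python
from abc import ABC, abstractmethod

class Repository(ABC):
    """Backend-neutral interface: each connected system implements search()."""
    @abstractmethod
    def search(self, **metadata):
        ...

def _match(index, metadata):
    # Shared helper: keep entries whose fields match every criterion.
    return [doc for doc in index
            if all(doc.get(k) == v for k, v in metadata.items())]

class FileShareConnector(Repository):
    def __init__(self, index):
        self.index = index  # pretend this was crawled from a network share
    def search(self, **metadata):
        return _match(self.index, metadata)

class SharePointConnector(Repository):
    def __init__(self, index):
        self.index = index  # pretend this mirrors a SharePoint library
    def search(self, **metadata):
        return _match(self.index, metadata)

class MetadataLayer:
    """Unifies results across repositories; the user never sees 'where'."""
    def __init__(self, repos):
        self.repos = repos
    def search(self, **metadata):
        results = []
        for repo in self.repos:
            results.extend(repo.search(**metadata))
        return results

layer = MetadataLayer([
    FileShareConnector([{"name": "SOW.docx", "customer": "Acme"}]),
    SharePointConnector([{"name": "NDA.pdf", "customer": "Acme"}]),
])
acme_docs = layer.search(customer="Acme")  # documents from both systems
```

One query over the layer returns documents from both backends, which is the “information can be anywhere” idea in miniature.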

In this phase, the “what vs. where” description takes on further meaning. Now it’s really not about where the information is. It could literally be anywhere. The information is unified based on context — for example, all the contracts associated with a particular customer/account, project, case or claim. The user doesn’t really have to care about where the information is anymore. It is surfaced based on its relationship to the context in which it is needed. Basically, information can remain in place without disturbing existing systems and processes, if and until the organization decides to migrate or transition users away from an older legacy system.

Migration, Change Management and Innovation

This idea of “in-place” information management addresses two really important stumbling blocks associated with information management systems, or really when implementing any new IT system: migration and change management.

First, such new system deployments almost always start with an expensive, time-consuming and disruptive data migration. With a repository-neutral approach, it is no longer required that the data be migrated to begin using the new solution. It can be migrated later if desired, and often more intelligently, in a phased manner. Often large amounts of information in a certain system are never accessed again, so why migrate that? What about only migrating the information that is being accessed, or that which is related to certain important information, and so on, all of which can be enhanced with AI and machine learning. Think of it as spinning up the new use case immediately, and then executing an intelligent, phased migration.
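The phased, usage-driven migration described above can be sketched very simply: score the legacy inventory by recency of access and move only what is still in use. The document names and the one-year threshold below are invented for illustration.

```python
from datetime import date, timedelta

# Invented inventory of legacy documents with last-access dates.
legacy_docs = [
    {"name": "Old-Archive-1998.doc", "last_accessed": date(2001, 3, 2)},
    {"name": "Active-Contract.pdf", "last_accessed": date.today()},
    {"name": "Recent-Proposal.docx",
     "last_accessed": date.today() - timedelta(days=30)},
]

def migration_batch(docs, max_age_days=365):
    """Phase one of an intelligent migration: move only what is still used."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [d["name"] for d in docs if d["last_accessed"] >= cutoff]

batch = migration_batch(legacy_docs)  # the old archive stays in place for now
```

Later phases could widen the criteria — for example, also pulling in documents related to the ones already migrated — which is where the AI-assisted selection mentioned above would come in.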

Second, it eases change management. Usually, not everyone is ready to move to a new system; often large bodies of users, or influential managers, can prevent a new solution from being deployed because they just don’t want to change. The old system is good enough. Well, now they don’t have to. Even small teams can implement a new use case or process that relies upon information in an existing system, while other groups continue to use the older system. The users that are more resistant to change can move to the new approach in the future, when the solution is more mature and when there are many others who are up to speed and can help them. Think of it as intelligent, phased change management.

Lastly, this change management element also enables innovation. Where does innovation occur in an organization? Well, certainly not everywhere at once. It happens in smaller groups, teams and departments. Since smaller groups can move ahead without requiring everyone to change, they can innovate more freely, and when their innovative new approaches show results, the rest of the organization can adopt them. In the past, new innovation could easily have stalled in the face of conflicting priorities, opinions and drawn out debates about switching from this system or that. Now even a small group can implement a new use case using and accessing information that remains in-place in other systems, while others who are not yet ready to move, or don’t have a need for the use case, can continue to rely on the older system indefinitely, until the company is ready to move them, or better, until they see the results of the new innovation and want to move!

The 360° view

Metadata and the associated relationships between business objects also form the basis of a 360° view of information, wherein relationships can be traversed in an intuitive manner. It’s almost as if relevant information finds the user instead of the other way around.

For example, one searches for a presentation that was given to a certain customer or account. Upon locating that presentation, one sees that it is related to the customer of interest. This customer can then be inspected to see that it is related to other documents, projects, cases, claims, etc. In the energy sector, it might be related to an oil or gas well; in the real estate sector, maybe it is a property or building, and on and on. So, then it becomes possible to expand the relationships with these other objects and see that they are related to other documents, projects and cases. And those also have relationships, so it becomes easy to discover important information that one may not have located with a traditional search.
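The traversal described in this example is essentially a breadth-first walk over a graph of metadata relationships. Here is a toy Python sketch — the object names and relationships are invented, and a real system would of course query its metadata layer rather than an in-memory dictionary.

```python
from collections import deque

# Toy relationship graph: each object points to its related objects.
relations = {
    "Presentation-Q3.pptx": ["Customer: Acme"],
    "Customer: Acme": ["Presentation-Q3.pptx", "Project: Pipeline-7",
                       "MSA-Acme.pdf"],
    "Project: Pipeline-7": ["Customer: Acme", "Inspection-Report.pdf"],
}

def related(start, max_hops=2):
    """Breadth-first traversal of metadata relationships up to max_hops."""
    seen, frontier = {start}, deque([(start, 0)])
    found = []
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for neighbor in relations.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                found.append(neighbor)
                frontier.append((neighbor, hops + 1))
    return found

# Starting from the presentation, the customer, its project and a related
# contract all surface within two hops.
discovered = related("Presentation-Q3.pptx")
```

Raising `max_hops` widens the circle — the inspection report three hops out appears as well — which is how information one would never have found with a keyword search “finds the user.”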

The Third Wave: Integration

Seamless “in-context” access to information in the user interface of your choice

The third wave, which we’ll describe as Integration, follows logically from the second wave of unification. This is all about integrating elements from the first two waves into a single common user interface — not the user interface of the information management system (or CSP, in Gartner’s terms), but the user interface of other core line-of-business applications. This has also been referred to as front end-neutral.

The idea is that purpose-built information management capabilities are integrated seamlessly into the user interfaces of other applications — like Office 365, Salesforce, G-Suite, SharePoint, Teams, NetSuite, SAP, QuickBooks, Workday or Esri ArcGIS, for example. In addition to the context established by metadata and relationships to important business objects like customers, projects and cases, a new layer of context is added to the picture, the context of the user interface of the line-of-business application.

Now directly from within an application like Salesforce, a salesperson could be working on a given customer or opportunity, and then, transparently to the user, information in any connected repository or system is presented in the context of that customer. Documents that are stored in SharePoint, Box, OpenText, even a traditional network file share, are accessible right there in the Salesforce interface. No need to switch to another UI. Salespeople can remain in the Salesforce UI longer — or office workers can remain in Office 365 — freely accessing information in a network file share, OpenText or Salesforce!

And it’s not only unstructured content. It’s other structured data as well. So right in the Salesforce interface, one can not only see the documents related to the customer of interest, they can see the projects or cases that are managed in a connected ERP or case management system.

This idea aligns almost directly with the recently announced Salesforce Customer 360 initiative. Salesforce starts out describing the Customer 360 initiative as follows:

Every company wants to deliver connected customer experiences across channels and departments. These experiences need to span siloed organizations, processes and infrastructure across marketing, commerce, sales and service.

Salesforce is speaking about unifying the data in their various “clouds,” such as the sales cloud, service cloud, commerce cloud, in the context of the customer, leading directly and explicitly to the 360° view concept described above. This is the third wave, albeit from a slightly different perspective, but the parallel is direct.

Everything That Comes in Threes is Perfect

OK, I’ll wrap up this missive by harkening back to the Latin phrase I started with, “Omne trium perfectum.” While it may be overstated to say that these three waves, or phases, constitute perfection, there is a clear evolution to a better place, at least as it relates to information management.

The old model of a monolithic, centralized place where everything resides just isn’t workable. It never really was. Information is exploding, and it’s everywhere. People want to work the way they want to work — not in some top-down, one-size-fits-all world that is imposed upon them. Flexibility, adaptability, personalization, democratization, connectedness, intelligence — these are the hallmarks of the third wave.

Context is the key theme that weaves through it all, establishing meaning and relevance, enabling people to focus on what’s most important to the exclusion of the less important. When you quickly filter out all the less important information, everything that will distract and defocus you, when you minimize context-switching (that word again) from platform to platform, or application to application, and surface the most important information right when you need it… when you do those things, focus improves, work gets done faster with higher quality, productivity increases and the big rocks get moved.