
Design Thinking and Taxonomy Design

May 2, 2018

In my experience I’ve found that any successful taxonomy design effort stems from a strong understanding of the end users’ needs – hardly a small task. One way that I’ve worked to address this challenge is by incorporating Design Thinking into our taxonomy design process.

IDEO defines Design Thinking as a human-centered approach to problem solving that brings together the needs of people, technology, and business to solve complex problems with innovative solutions. The process is broken into phases, which can all occur in parallel and be repeated iteratively. This blog outlines how we at EK integrate each phase during a taxonomy design.

Why Design Thinking? Here at EK, we’ve seen countless instances where taxonomy design efforts suffer from a lack of buy-in and alignment, resulting in stagnation because users aren’t adopting and using the taxonomy. This methodology addresses those issues because it provides opportunities to fully understand users and their needs, and make sure that you’re truly designing for them. Using this approach ensures that a taxonomy design is one that users support and one that combines findability with usability.

Phase 1: Empathize

To start, you need to achieve an in-depth understanding of the problem that needs to be solved and remove any assumptions you may possess. This involves empathizing with users by observing and interacting with them to understand their experiences and motives. We’ve found this is often lacking in taxonomy design initiatives, where project stakeholders aren’t aligned on goals, or do not clearly understand the “why” of a taxonomy.  

There are many approaches that you can take in order to accomplish this goal. At EK, we conduct interviews and focus groups, and facilitate taxonomy workshops. Interviews and focus groups can help you learn what your end users struggle with when it comes to finding and discovering information. Be conscious of who you’re interviewing and what types of questions you’re asking. Are you interviewing a range of users, representing different levels of experience and different areas of expertise? Are you asking leading questions based on what you assume the problem(s) to be?

Workshops in particular are incredibly valuable because they provide the opportunity to involve actual business users in the initial design phases, mitigating the risk of incorrectly presuming design requirements. While interviews and focus groups arguably offer the same benefits, workshop participants can additionally become your strongest advocates for a taxonomy design, as they are truly involved from the very beginning. In addition to interviews, focus groups, and workshops, consider collaboratively developing personas and empathy maps to identify user differentiators and key user needs. Together, these tools will help you draw key insights from your end users.

Phase 2: Define

The Define stage involves analyzing and synthesizing all of the previously gathered information to define the core problem(s) affecting your end users. In this phase, you’ll need to clearly define all of the users’ needs.

At EK, rather than focusing on creating a problem statement, we shift the focus to creating an outcome statement. In short, we’re asking the end users to answer the question, “What will this taxonomy allow end users to do or accomplish?” Asking this type of question allows us to easily capture the expectations and desires of the end users and make sure that we’re delivering a product that works for them. Just like an effective problem statement, this outcome statement simultaneously focuses your end users on their specific needs and creates a sense of possibility that allows team members to bounce ideas off one another in the Ideation stage.

Phase 3: Ideate

Armed with your user insights and clear problem/outcome statements, you can progress to the Ideate phase to identify alternative ways of viewing the problem and, subsequently, new solutions.

Here is where you may begin to move towards initial metadata field and value identification and prioritization, keeping in mind the aforementioned outcome statement. While it’s important to preface this phase with criteria regarding characteristics of successful business taxonomies, it’s also important to get a range of potential ideas and make sure everything is at least captured. The resulting set of metadata fields and corresponding values can give a high-level overview of the important content characteristics which may need to be reflected in the taxonomy.

Phases 4 and 5: Prototype and Test

The Prototype phase offers the opportunity to test your potential solutions through inexpensive, scaled-down versions of the product or specific features. The final Test phase involves rigorous testing of the complete product. Taxonomy on paper tends to be abstract. Our prototyping and testing approaches bring real business context to the taxonomy design effort for our end users.

The metadata fields that are identified in the Ideate phase can help form a “starter taxonomy” that will be further tested and elaborated in order to become a truly effective business taxonomy. One way we tackle this phase at EK is through card sorting, a technique to discover how end users categorize information, which in turn helps to validate portions of a taxonomy design. The exercise can also help identify which categories need adjustments based on user feedback.
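The article doesn’t prescribe a specific analysis method, but one common way to summarize open card-sort results is a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group. High-agreement pairs suggest categories that belong together in the taxonomy. A minimal sketch in Python, with made-up cards and participants:

```python
from itertools import combinations
from collections import Counter

# Hypothetical results: each participant's sort is a list of groups,
# and each group is a set of card labels.
sorts = [
    [{"invoice", "receipt"}, {"contract", "nda"}],
    [{"invoice", "receipt", "contract"}, {"nda"}],
    [{"invoice", "receipt"}, {"contract", "nda"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        # Count every pair of cards that this participant grouped together.
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement = share of participants who grouped the pair together.
for pair, n in pair_counts.most_common():
    print(pair, round(n / len(sorts), 2))
```

Pairs that nearly all participants group together (here, “invoice” and “receipt”) validate a category; pairs with middling agreement flag the categories that need adjustment based on user feedback.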

By the end of the prototyping stage, the team will have a clearer idea of the limitations of the taxonomy, the problems that exist, and a better understanding of how real users would act, think, and feel when interacting with the end product. In taxonomy design, testing of the complete product is ongoing, with alterations and refinements being considered and made through taxonomy governance to better reflect the end users and their evolving needs.

Conclusion

Progress in your taxonomy design effort starts with a clear understanding of your end users. That’s why Design Thinking can be incredibly helpful when building a taxonomy that will meet the real needs of your organization and your end users. This iterative, flexible, and collaborative methodology allows you to quickly identify, build, and test your way to success.

Conversational Leadership: 3 Steps to Improve Conversations

April 16, 2018

"If you want to go fast, go alone; if you want to go far, go together."  -- African Proverb

Do conversations need to improve?

Organizations have a purpose to fulfill. There are many resources available to each organization so that they can fulfill their unique purpose. These resources can be tangible or intangible. Examples of tangible resources include buildings, computers, people and other resources that can be physically seen. Examples of intangible resources include patents, trademarks, goodwill and other resources that cannot be physically seen or touched. Over the past two decades, there has been a significant shift in how organizational value is created and maintained. The shift is from tangible value to intangible value. Organizations are now creating more value from intangible assets than tangible assets.  

The Wall Street Journal has been reporting on this trend. In an article titled “Accounting’s 21st Century Challenge: How to Value Intangible Assets,” author Vipal Monga, drawing on the work of economist Carol Corrado, depicts the shift: the accompanying chart shows how the majority of organizational value has moved from tangible assets to intangible assets. Another data point is that companies within the S&P 500 are experiencing an increase in “market premium” as their intangible valuation grows. This trend is not only happening in the United States; it is happening in many countries around the world. It is also visible beyond the for-profit sector: we are seeing it in government, military, non-profit, not-for-profit, and volunteer organizations.

One of the most overlooked intangible resources is “conversation”. The simple act of talking to each other is easily overlooked and taken for granted. The art and science of conversation is often left to chance, resting on a broad range of assumptions. It is assumed that each person and each group are having the best possible conversations that they can have.

The multi-disciplinary field of Knowledge Management is beginning to study the art and science of conversation to see if there is room for improvement. The nomenclature being used for this initiative is “Conversational Leadership”.  The two core questions of this new practice have been defined as “are we having the conversation we need to be having right now” and “are we having it in the way we need to be having it.”

3 Steps To Improve Conversations

  1. Pure listening: Truly listen to other people’s spoken words (and non-verbals) while simultaneously and separately holding your perception of those words. Be aware of the time and space for your interpretations of other people’s words and phrases.
  2. Continuous awareness: As you’re speaking, raise your own awareness for multiple interpretations of the words you speak (as well as your non-verbals). Be prepared for a range of responses so that you can maintain the optimal flow of the conversation.
  3. Maintain curiosity: Skillfully “check-in” to see and hear how your words (and non-verbals) were perceived by other people.

Details of the 3 Steps

Step 1 - Pure listening: truly listen to other people’s spoken words and non-verbals while simultaneously and separately holding your perception of those words.

Stephen Covey’s wise words of “listen to understand as opposed to respond” continue to be profound. It is quite common in most conversations to communicate back and forth through a general “gist”. If one person struggles to find words to convey their message, or if there’s slight confusion, you’ll often hear the phrase “you get the gist” from the speaker. Not only is it difficult to purely listen to someone else, it is also difficult to convey a pure message.

Each person has their own biases and filters. Biases and filters can be known by the speaker, and they can also be subconscious and unknown by the person speaking. “Confirmation Bias” is one example of a bias. In confirmation bias, an individual looks for evidence to confirm their judgement/opinions of another person. Similar to many biases, this may be done subconsciously without the conscious awareness of either person. Being conscious and aware of your own biases is a major portion of pure listening.

An example of a filter is the “Curse Word Filter.” This filter can be described as a person thinking of a curse word and then consciously or subconsciously choosing not to curse out loud in the moment. Dave Snowden is credited with the quote “we know more than we say, and we say more than we write.” The quote is often intended to separate “knowledge” from “information” and yet in this case it serves as another example of a filter. There are many biases and filters that an individual may have. Raising your own awareness and understanding of these biases and filters is essential to pure listening.

Step 2 - Continuous awareness: Raise your awareness and preparedness for multiple interpretations of the words you speak, as well as multiple interpretations of your non-verbals.  

The speed of conversation is relatively fast. It is easy to miss real-time reactions and responses. It is difficult to create time and space for an impactful conversation to reveal itself. The challenge is to simultaneously stay engaged in the conversation as well as maintain awareness of yourself and others.

A generic example of this awareness is to consider the unfortunate event of a car accident. If 5 people witnessed the accident, each of those 5 people will have a separate perception, recollection and specific words to describe the event. Another example is the classic illustration of a numeral drawn on the ground between two people: one person is clearly looking at the number 6 while the other is clearly looking at the number 9. Each person is accurate, and yet in their conversation they may struggle to understand each other.

This is similar to what is happening in real-time during a conversation; each person is having an emergent perception of what is being said (and not said). Consider the range of thoughts, bodily sensations, feelings and emotions you have during a conversation. Notice how Step 2 Awareness pairs with Step 1 Listening: there are many levels of conversation occurring and unfolding at a rapid rate. Lightly hold an awareness of “what did I just say” or “what did I just think” or “what did I just feel” while balancing your awareness and engagement in the conversation itself.

Step 3 – “Maintain curiosity: Skillfully 'check-in' to hear how your words (and non-verbals) were perceived.”

It can be quite awkward when someone says “wait, that’s not what I meant” or “can I check my understanding of what you just said”. You’ll hear phrases such as “let me try to repeat that back [in my own words]” or “what I think you’re trying to say is...” The ability to skillfully ask for clarification and understanding is rare and challenging. Just as Step 1 Listening and Step 2 Awareness work together, Step 3 works in tandem with both.

In the field of Organization Development, there are concepts known as “group-task” and “group-maintenance”. Group-task is “the work to be done”. In other words, the content of group-task conversations is mostly related to what each person is working on and their current status and progress on those tasks. By contrast, group-maintenance is “how the group is working together (or not)”. The content of group-maintenance conversations is mostly related to how the group creates and shares its collective understanding. Most conversations are group-task related and very few conversations (or comments) are group-maintenance related.

There is likely room for improvement in the balance between group-task and group-maintenance comments and conversations. Each person will have their own comfort level for that balance, and for which words to use in each case. Taking all three steps of this article into consideration will also support the balance, as well as the range and health of conversations.

The Positive and Progressive Result

Research has shown that people are feeling less heard and less understood. There is a common quote that “there is one fear that drives all others, that is the fear of losing or being out of control”. While conversations may not be out of control, they may often be headed in that direction. Conversations appear to be more and more concerned with activity and progress, as opposed to collectively created connection and understanding. These 3 steps provide an opportunity to increase your conversational skill, which results in the “right conversations at the right time in the right way” (based upon the emergent needs of the individuals and the group). These skillful conversations continue to deliver the desired activity and progress as well as increase the human heart and soul connection. The positive and progressive result will be an increase in shared understanding and quality, and a decrease in much of the turmoil facing our world today.

Next Steps

The research continues in this emergent field of Conversational Leadership. If you’d like to join the conversation and contribute to the research and analysis, please contact John Hovell at John.Hovell@STRATactical.com.

What is Knowledge Management and Why Is It Important?

April 3, 2018

As I’ve often asserted, Knowledge Management struggles with its own identity. There are any number of definitions of KM, many of which put too much stress on the tacit knowledge side of the knowledge and information management spectrum, are overly academic, or are simply too abstract. At Enterprise Knowledge, we’ve adopted a concise definition of knowledge management:

Knowledge Management involves the people, process, culture, and enabling technologies necessary to Capture, Manage, Share, and Find information.

The actions at the end of that sentence are the most critical component. All good KM should be associated with business outcomes, value to stakeholders, and return on investment. We discuss these actions as follows:

  • Capture entails all the forms in which knowledge and information (content) move from tacit to explicit, unstructured to structured, and decentralized to centralized. This ranges from an expert’s ability to easily share their learned experience, to a content owner’s ability to upload a document they’ve created or edited.
  • Manage involves the sustainability and maturation of content, ensuring content becomes better over time instead of becoming bloated, outdated, or obsolete. This is about the content itself, its format, style, and architecture. Management also covers the appropriate controls and workflows necessary to protect it, and the people who may access it.
  • Share includes both an individual’s and organization’s ability and capacity to collaborate and pass knowledge and information via a variety of means, ranging from one-to-one to one-to-all, synchronous to asynchronous, and direct to remote.
  • Find covers the capabilities for the knowledge and information to be easily and naturally surfaced. The concept of findability goes well beyond traditional “search,” and includes the ability to traverse content to discover additional content (discoverability), connect with experts, and receive recommendations and “pushes.”

We’ve taken this simple definition as the foundation for what we call our KM Action Wheel. The Action Wheel expresses the type of actions we seek to encourage and enable for the organizations and individuals with whom we work. It adds a bit of additional specificity to the aforementioned:

  • Create recognizes that a key element of good KM is not simply the capture of existing knowledge, but the creation of new knowledge. This can take a number of forms, from allowing knowledge creation by an individual via innovation forums or social reporting, to group knowledge creation via better and improved collaboration and collaboration systems.
  • Enhance focuses on the fact that effective KM will lead not just to the creation and capture of knowledge, but the sustainable improvement of that knowledge. In short, this means creation and stewardship of the leadership, processes, and technologies to make information “better” over time rather than having it fall into disrepair. Content’s natural state is entropy, and good KM will counteract that. Enhancement also covers the application of metadata, comments, or linkages to other information in order to improve the complete web of knowledge.
  • Connect drills in on the “Find” action mentioned above, recognizing that KM is more than access to knowledge and information in paper or digital forms, it is also about direct access and formation of connections with the holders of that knowledge. This concept is even more critical with more and more well-tenured experts leaving the workforce and taking their knowledge with them. The more KM can connect holders of knowledge with consumers of knowledge, the smarter an organization is, and the more effective it can be about transferring that knowledge.

KM is important, simply put, because many, if not most, organizations and their employees struggle to perform these aforementioned actions easily, consistently, or at all. Effective KM is that which allows individuals and organizations to perform the actions discussed above in an intuitive, natural, and relatively simple manner.

This is not to say that KM isn’t already happening in any number of good ways. Many organizations with whom we work have already invested significantly in their own KM maturity or are at least ready to do so. When we conduct a KM assessment for an organization, we frequently find “hero KM’ers” who are doing their best to perform these actions not because it is part of their job description, or because their boss told them to, or because the company processes make it easy to do so, but because they understand their value and are trying. Very few organizations are starting from “0,” and many have the potential to make meaningful steps if they know how to proceed.

Knowledge Management is about mindset and people - not technology

March 13, 2018

This is an article that has been brewing at the back of my mind for a while. As I have engaged with more and more organisations, on the topic of “Knowledge Management Strategy”, it has been proven over and over again that most of us are making the same mistake: We tend to think that transforming our teams and our companies into a “Knowledge-centric organisation” is all about acquiring the latest collaboration tool, or (re-) defining our processes and scorecards. I can tell you with confidence that it is not.

True Knowledge Management is about attitude and mindset above all else. It is about the culture in your organization and whether you and your leadership are fostering an environment that allows people to be truly collaborative. Talking about “growth mindset”, “customer obsession” or having a “bias for action” is all well and good but do you, and more importantly, do your co-workers truly believe it? And are you all living it?

Are your leaders, on all levels, walking the talk? Are your performance management and reward systems set up to incentivize people for impact and results, as opposed to making the scorecard or blindly following the process? Do your organization and culture encourage people to follow their passions and be creative? Are they allowed to come up with crazy ideas, take risks, fail and learn from it without being punished, just as much as they are rewarded for meeting or exceeding the expectations that “the system” has defined for them? Is doing your job and doing it well more important than taking initiatives and running with your ideas?

Having your leaders demonstrate and live an open, honest, and collaborative style, being approachable and open to new ideas and new ways of tackling problems, while recognizing that even a failed initiative has its benefits, is key. As is learning from your own and others’ mistakes.

So, let’s completely ignore the scorecards and incentive compensation models for a while and focus on a few fundamental questions that may help you start thinking about what company culture you have today and where you want it to be tomorrow:

1. Do the people in your organisation feel safe to be creative and collaborative? Do they feel safe to fail without being shamed?

2. Do you understand what motivates people outside of the scorecards and incentive compensation models? On a personal level, not just on a professional level.

3. Is there room for informal groups to form, address problems and create solutions, even if it is outside the formal company structure? Basically, is there room for taking initiatives?!

A safe environment

I find this is the most fundamental aspect of building a collaborative and knowledge-centric team. If people do not feel that it is OK to ask “stupid questions” or propose “crazy ideas”, you will never build and grow knowledge at an effective rate – you will regurgitate what is acceptable and established, but you will not evolve. Classic, hierarchical structures, where people feel they have to run everything up and down the chain before taking action or starting an initiative, are very counter-productive to collaboration and innovation.

Understanding what drives people

What makes your people tick? What makes them jump out of bed in the morning and be truly inspired to do their best? This is not necessarily all about allowing people to follow their passions (or overlook their day job) but it can be as simple as enabling them to work when and from where they feel the most inspired. Or when/how it best supports their family situation. As long as people do good work and make an impact, does it matter if they do it between 9 and 5 in the office or can they actually do more and better at another time or location? This is where modern workplace tools intersect and can make a difference in how you enable your workforce.

Informal vs formal teams

We talk so much about “diversity and inclusion” in corporate America, and around the world, today, but what does that really mean? Does it mean we put quotas on hiring across gender, ethnicity and geography – or does it mean we allow and encourage people to connect and collaborate with people they think can help solve a specific business problem? Or take a “crazy idea” from being just an idea to something real? “Cross-team collaboration” is a common term these days, but is it something that is pushed for the sake of pushing it, or does it happen spontaneously because your people recognize the value of connecting with people from other teams, geos or companies?

I realise this article probably raises more questions than it provides answers and that was exactly my intent.

I don’t have a silver bullet for you. I am not going to tell you that if you scrap your utilization-based incentive compensation model and replace it with something else, it is going to solve everything. That may be an idea and something you want to consider, but that all depends on the behaviours you see, and the behaviours you would like to see, in your organization. If you want to foster a truly collaborative environment, that takes more than modernizing your performance management or reward system.

You need to think about the culture you have and what culture you want. And hopefully this article will help you start that thinking process!

As always, I welcome your thoughts and your comments - and please remember that all of my thoughts and opinions expressed here are my own and should not be interpreted as official, or a reflection of, those of my employers - past or current.

Measuring the Effectiveness of Your Knowledge Management Program

February 14, 2018

The ability to measure the effectiveness of your Knowledge Management (KM) program and the initiatives essential to its success has been a challenge for every organization executing a KM program. Capturing the appropriate metrics is essential to measuring the right aspects of your KM Program. The right metrics will facilitate clear and correct communication of the health of the KM program to your organization’s leadership. In this post, I will identify metrics (or measurements) for four key initiatives of most KM Programs: Communities of Practice, Search, Lessons Learned, and Knowledge Continuity.

Community of Practice (CoP) Metrics

Typical CoP metrics include:

  • Average posts per day
  • Unique contributors (people posting at least once)
  • Repeat contributors (people posting more than once)
  • Majority contributors (the minimum number of people accounting for more than 50% of posts)
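All four of these metrics fall out of a simple log of community posts. A minimal illustration in Python (the post records and five-day window are invented for the example):

```python
from collections import Counter

# Hypothetical post log: (day, author) pairs over a 5-day window.
posts = [
    (1, "ana"), (1, "ben"), (2, "ana"), (2, "ana"),
    (3, "cal"), (4, "ana"), (4, "ben"), (5, "dee"),
]
days_observed = 5

by_author = Counter(author for _, author in posts)

avg_posts_per_day = len(posts) / days_observed
unique_contributors = len(by_author)                     # posted at least once
repeat_contributors = sum(1 for n in by_author.values() if n > 1)

# Majority contributors: the fewest people accounting for > 50% of posts.
majority_contributors = 0
running = 0
for n in sorted(by_author.values(), reverse=True):
    majority_contributors += 1
    running += n
    if running > len(posts) / 2:
        break

print(avg_posts_per_day, unique_contributors,
      repeat_contributors, majority_contributors)
```

A small majority-contributor count relative to unique contributors is often an early warning that the community depends on a handful of people.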

Some points to consider:

  • Recognize the diversity of interests in those participating in the group, and that this is a voluntary undertaking for all involved.
  • Develop a stakeholder classification and perform a RACI assessment for each stakeholder group.
  • Through a collaborative process, arrive at coherent goals, objectives, principles and strategies for the group.
  • Develop a CoP plan with agreed upon moderator criteria and stakeholders that influence group behavior in ways that are congruent with the group’s goals and objectives.

Search Metrics

Search Metrics are determined through Tuning and Optimization

Site Owners/Administrators should constantly observe and evaluate the effectiveness of search results. They should be able to gather Search Results reports from the KMS administrator periodically (every two weeks). From these reports, they can analyze the types of keywords users are searching for and the sites from which most of the search queries come. Based on this, Site Owners/Administrators can add ‘synonyms’ for their sites. If any newly added metadata column needs to be available in Advanced Search filters, the request must be sent to the KMS administrator.

Typical search metrics include:

  • Search engine usage – Search engine logs can be analyzed to produce a range of simple reports, showing usage, and a breakdown of search terms.
  • Number of Searches performed (within own area and across areas)
  • Number of highly rated searches performed
  • User rankings – This involves asking the readers themselves to rate the relevance and quality of the information being presented. Subject matter experts or other reviewers can directly assess the quality of material on the KM platform.
  • Information currency – This is a measure of how up-to-date the information stored within the system is. The importance of this measure will depend on the nature of the information being published and how it is used. A great way to track this is with metadata such as publishing and review dates. Using this metadata, automated reports showing a number of specific measures can be generated:
  1. Average age of pages
  2. Number of pages older than a specific age
  3. Number of pages past their review date
  4. Lists of pages due to be reviewed
  5. Pages to be reviewed, broken down by content owner or business group
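Given publishing and review dates in the content metadata, these reports reduce to simple date arithmetic. A sketch in Python, where the page records, field names, and reporting date are all hypothetical:

```python
from datetime import date

today = date(2018, 2, 14)  # hypothetical reporting date

# Hypothetical metadata records exported from the KM platform.
pages = [
    {"title": "Onboarding guide", "published": date(2016, 3, 1),  "review_due": date(2017, 3, 1)},
    {"title": "Travel policy",    "published": date(2017, 9, 15), "review_due": date(2018, 9, 15)},
    {"title": "Branding kit",     "published": date(2015, 1, 10), "review_due": date(2016, 1, 10)},
]

# Average age of pages, in days.
ages = [(today - p["published"]).days for p in pages]
avg_age_days = sum(ages) / len(ages)

# Pages older than a specific age (here, two years).
older_than_two_years = sum(1 for a in ages if a > 730)

# Pages past their review date.
past_review = [p["title"] for p in pages if p["review_due"] < today]

print(round(avg_age_days), older_than_two_years, past_review)
```

Grouping the same records by a content-owner or business-group field would yield the per-owner review lists mentioned above.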

  • User feedback – A feedback mechanism is a clear way to indicate whether staff are using the knowledge. Even when many feedback messages point to poor-quality information, a high volume of feedback still indicates strong staff use. It also shows that staff have sufficient trust in the system to commit the time needed to send in feedback.

Lessons Learned Metrics

Lessons Learned Basic Process: Identify – Document – Analyze – Store – Retrieve

Metrics are determined and organized by key fields from the lessons learned template and include responses gathered during the session. Lessons learned should be identified by the type of lesson captured (i.e., resource, time, budget, system, content, etc.). Summarize each lesson learned by creating a brief summary of the findings and providing recommendations for correcting them (i.e., Findings – a summary of the issues found during the review process; Recommendations – recommended actions to be taken to correct the findings). In order to provide accurate metrics, the approved actions should be documented and tracked to completion. In some cases, an approved action may become a project due to the high level of resources required to address the finding. Some metrics include: impact analysis (time increased/decreased, improper resourcing, budget constraints, software/system limitations, lack of available content, etc.), and applying lessons learned (% of problems/issues solved with a lesson learned, per category and overall).
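The “% of problems solved with a lesson learned, per category and overall” metric can be computed directly from the tracked action items. A small sketch in Python; the records and field names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical lessons-learned records: the lesson's category and
# whether its approved corrective action was tracked to completion.
lessons = [
    {"category": "resource", "resolved": True},
    {"category": "resource", "resolved": False},
    {"category": "budget",   "resolved": True},
    {"category": "system",   "resolved": True},
    {"category": "system",   "resolved": True},
]

totals = defaultdict(int)
solved = defaultdict(int)
for item in lessons:
    totals[item["category"]] += 1
    solved[item["category"]] += item["resolved"]  # True counts as 1

# Percent solved per category, and overall.
percent_per_category = {c: 100 * solved[c] / totals[c] for c in totals}
overall_percent = 100 * sum(solved.values()) / len(lessons)

print(percent_per_category, overall_percent)
```

Categories with persistently low percentages are the ones worth escalating into projects, as noted above.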

Knowledge Continuity

The keys at the heart of knowledge continuity include:

  • What constitutes mission-critical knowledge that should be preserved?
  • Where is the targeted mission-critical knowledge, and is it accessible and transferable?
  • What decisions and actions are required to stem the loss of valuable, and in many cases irreplaceable, knowledge?
  • How can the lessons learned and best practices of the most experienced and valuable workers be captured, transferred, and stored in a knowledge base (or KM application) before those employees depart or retire?

Some Metrics Include:

  • Percentage of knowledge harvested and stored from key employees.
  • Percentage of knowledge transferred to successor employees.
  • Cost associated with preventing the loss of corporate mission-critical knowledge.
  • Whether a structured framework and system exists to store, update, access, enrich, and transfer knowledge to employees to support their work activities.
  • Ramp-up time for new hires, moving them rapidly up their learning curves and making them productive sooner.

Let me know if you agree with the metrics identified here and/or if you know of additional metrics within these key initiatives that must be captured. I look forward to your responses.