
What is Knowledge Management and Why Is It Important?

April 3, 2018

As I’ve often asserted, Knowledge Management struggles with its own identity. There are any number of definitions of KM, many of which put too much stress on the tacit knowledge side of the knowledge and information management spectrum, are overly academic, or are simply too abstract. At Enterprise Knowledge, we’ve adopted a concise definition of knowledge management:

Knowledge Management involves the people, process, culture, and enabling technologies necessary to Capture, Manage, Share, and Find information.

The actions at the end of that sentence are the most critical component. All good KM should be associated with business outcomes, value to stakeholders, and return on investment. We discuss these actions as follows:

  • Capture entails all the forms in which knowledge and information (content) move from tacit to explicit, unstructured to structured, and decentralized to centralized. This ranges from an expert’s ability to easily share their learned experience, to a content owner’s ability to upload a document they’ve created or edited.
  • Manage involves the sustainability and maturation of content, ensuring content becomes better over time instead of becoming bloated, outdated, or obsolete. This is about the content itself: its format, style, and architecture. Management also covers the appropriate controls and workflows necessary to protect the content and to govern who may access it.
  • Share includes both an individual’s and organization’s ability and capacity to collaborate and pass knowledge and information via a variety of means, ranging from one-to-one to one-to-all, synchronous to asynchronous, and direct to remote.
  • Find covers the capabilities for the knowledge and information to be easily and naturally surfaced. The concept of findability goes well beyond traditional “search,” and includes the ability to traverse content to discover additional content (discoverability), connect with experts, and receive recommendations and “pushes.”

We’ve taken this simple definition as the foundation for what we call our KM Action Wheel. The Action Wheel expresses the type of actions we seek to encourage and enable for the organizations and individuals with whom we work. It adds a bit of additional specificity to the aforementioned:

  • Create recognizes that a key element of good KM is not simply the capture of existing knowledge, but the creation of new knowledge. This can take a number of forms, from individual knowledge creation via innovation forums or social reporting, to group knowledge creation via improved collaboration practices and systems.
  • Enhance focuses on the fact that effective KM will lead not just to the creation and capture of knowledge, but the sustainable improvement of that knowledge. In short, this means creation and stewardship of the leadership, processes, and technologies to make information “better” over time rather than having it fall into disrepair. Content’s natural state is entropy, and good KM will counteract that. Enhancement also covers the application of metadata, comments, or linkages to other information in order to improve the complete web of knowledge.
  • Connect drills into the “Find” action mentioned above, recognizing that KM is more than access to knowledge and information in paper or digital forms; it is also about direct access to, and the formation of connections with, the holders of that knowledge. This concept is even more critical as more and more well-tenured experts leave the workforce and take their knowledge with them. The more KM can connect holders of knowledge with consumers of knowledge, the smarter an organization is and the more effective it can be at transferring that knowledge.

KM is important, simply put, because many, if not most, organizations and their employees struggle to perform these aforementioned actions easily, consistently, or at all. Effective KM is that which allows individuals and organizations to perform the actions discussed above in an intuitive, natural, and relatively simple manner.

This is not to say that KM isn’t already happening in any number of good ways. Many organizations with whom we work have already invested significantly in their own KM maturity or are at least ready to do so. When we conduct a KM assessment for an organization, we even more frequently find “hero KM’ers” who are doing their best to perform these actions not because it is part of their job description, or because their boss told them to, or because the company processes make it easy to do so, but because they understand the value of these actions and are trying. Very few organizations are starting from “0,” and many have the potential to take meaningful steps if they know how to proceed.

Knowledge Management is about mindset and people - not technology

March 13, 2018

This is an article that has been brewing at the back of my mind for a while. As I have engaged with more and more organisations on the topic of “Knowledge Management Strategy”, it has become clear over and over again that most of us are making the same mistake: we tend to think that transforming our teams and our companies into a “Knowledge-centric organisation” is all about acquiring the latest collaboration tool, or (re-)defining our processes and scorecards. I can tell you with confidence that it is not.

True Knowledge Management is about attitude and mindset above all else. It is about the culture in your organization and whether you and your leadership are fostering an environment that allows people to be truly collaborative. Talking about “growth mindset”, “customer obsession” or having a “bias for action” is all well and good but do you, and more importantly, do your co-workers truly believe it? And are you all living it?

Are your leaders, on all levels, walking the talk? Are your performance management and reward systems set up to incentivize people for impact and results, as opposed to making the scorecard or blindly following the process? Do your organization and culture encourage people to follow their passions and be creative? Are they allowed to come up with crazy ideas, take risks, fail, and learn from it without being punished, just as much as they are rewarded for meeting or exceeding the expectations that “the system” has defined for them? Is doing your job and doing it well more important than taking initiatives and running with your ideas?

Having your leaders demonstrate and live an open, honest, and collaborative style, being approachable and open to new ideas and new ways of tackling problems, while recognizing that even a failed initiative has its benefits, is key. So is learning from your own and others’ mistakes.

So, let’s completely ignore the scorecards and incentive compensation models for a while and focus on a few fundamental questions that may help you start thinking about what company culture you have today and where you want it to be tomorrow:

1. Do the people in your organisation feel safe to be creative and collaborative? Do they feel safe to fail without being shamed?

2. Do you understand what motivates people outside of the scorecards and incentive compensation models? On a personal level, not just on a professional level.

3. Is there room for informal groups to form, address problems, and create solutions, even if it is outside the formal company structure? Basically, is there room for taking initiatives?!

A safe environment

I find this is the most fundamental aspect of building a collaborative and knowledge-centric team. If people do not feel that it is OK to ask “stupid questions” or propose “crazy ideas”, you will never build and grow knowledge at an effective rate; you will regurgitate what is acceptable and established, but you will not evolve. Classic, hierarchical structures, where people feel they have to run everything up and down the chain before taking action or starting an initiative, are very counter-productive to collaboration and innovation.

Understanding what drives people

What makes your people tick? What makes them jump out of bed in the morning, truly inspired to do their best? This is not necessarily about allowing people to follow their passions (or overlook their day job); it can be as simple as enabling them to work when and from where they feel the most inspired, or when and how it best supports their family situation. As long as people do good work and make an impact, does it matter whether they do it between 9 and 5 in the office, or could they actually do more and better at another time or location? This is where modern workplace tools intersect and can make a difference in how you enable your workforce.

Informal vs formal teams

We talk so much about “diversity and inclusion” in corporate America, and around the world, today, but what does that really mean? Does it mean we put quotas on hiring across gender, ethnicity, and geography, or does it mean we allow and encourage people to connect and collaborate with people they think can help solve a specific business problem? Or take a “crazy idea” from being just an idea to something real? “Cross-team collaboration” is a common term these days, but is it something that is pushed for the sake of pushing it, or does it happen spontaneously because your people recognize the value of connecting with people from other teams, geos, or companies?

I realise this article probably raises more questions than it provides answers and that was exactly my intent.

I don’t have a silver bullet for you. I am not going to tell you that if you scrap your utilization-based incentive compensation model and replace it with something else, it is going to solve everything. That may be an idea and something you want to consider, but that all depends on the behaviours you see, and the behaviours you would like to see, in your organization. If you want to foster a truly collaborative environment, that takes more than modernizing your performance management or reward systems.

You need to think about the culture you have and what culture you want. And hopefully this article will help you start that thinking process!

As always, I welcome your thoughts and your comments - and please remember that all of my thoughts and opinions expressed here are my own and should not be interpreted as official, or a reflection of, those of my employers - past or current.

Measuring the Effectiveness of Your Knowledge Management Program

February 14, 2018

Measuring the effectiveness of your Knowledge Management (KM) program, and of the initiatives essential to its success, has been a challenge for every organization executing a KM program. Capturing the appropriate metrics is essential to measuring the right aspects of your KM program, and the right metrics will facilitate clear and correct communication of the health of the KM program to your organization’s leadership. In this post, I will identify metrics (or measurements) for four key initiatives of most KM programs: Communities of Practice, Search, Lessons Learned, and Knowledge Continuity.

Community of Practice (CoP) Metrics

Typical CoP metrics include:

  • Average posts per day
  • Unique contributors (people posting at least once)
  • Repeat contributors (people posting more than once)
  • Majority contributors (minimum number of people accounting for more than 50% of posts)
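
These counts can be computed directly from a community post log. Below is a minimal sketch in Python, assuming a hypothetical export of (author, date) pairs from the community platform; the data and field layout are illustrative, not the API of any particular tool:

from collections import Counter
from datetime import date

# Hypothetical post log exported from the community platform: (author, date) pairs.
posts = [
    ("alice", date(2018, 2, 1)),
    ("bob",   date(2018, 2, 1)),
    ("alice", date(2018, 2, 2)),
    ("carol", date(2018, 2, 3)),
    ("alice", date(2018, 2, 3)),
]

def cop_metrics(posts):
    by_author = Counter(author for author, _ in posts)
    active_days = {day for _, day in posts}
    total = len(posts)

    # Majority contributors: smallest number of people accounting for more than 50% of posts.
    running, majority = 0, 0
    for _, count in by_author.most_common():
        running += count
        majority += 1
        if running > total / 2:
            break

    return {
        "avg_posts_per_day": total / len(active_days),
        "unique_contributors": len(by_author),                          # posted at least once
        "repeat_contributors": sum(c > 1 for c in by_author.values()),  # posted more than once
        "majority_contributors": majority,
    }

print(cop_metrics(posts))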

Some points to consider:

  • Recognize the diversity of interests in those participating in the group, and that this is a voluntary undertaking for all involved.
  • Develop a stakeholder classification and perform a RACI assessment for each stakeholder group.
  • Through a collaborative process, arrive at coherent goals, objectives, principles and strategies for the group.
  • Develop a CoP plan with agreed upon moderator criteria and stakeholders that influence group behavior in ways that are congruent with the group’s goals and objectives.

Search Metrics

Search Metrics are determined through Tuning and Optimization

Site Owners/Administrators should constantly observe and evaluate the effectiveness of search results. They should be able to gather Search Results reports from the KMS administrator periodically (e.g., every two weeks). From these reports, they can analyze the keywords users are searching for and which sites most of the search queries come from. Based on this, Site Owners/Administrators can add ‘synonyms’ for their sites. If any newly added metadata column needs to be available in Advanced Search filters, the request must be sent to the KMS administrator.
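
As a rough illustration, this kind of periodic report could be assembled from exported search logs along the following lines; the log format and field names here are assumptions, not the interface of any particular KMS:

from collections import Counter

# Hypothetical search log entries exported from the KMS: (query, source_site) pairs.
search_log = [
    ("travel policy", "HR Portal"),
    ("expense report", "Finance"),
    ("travel policy", "HR Portal"),
    ("travel reimbursement", "Finance"),
]

def search_report(search_log):
    """Breakdown of search terms and of the sites queries originate from."""
    terms = Counter(query.lower() for query, _ in search_log)
    sites = Counter(site for _, site in search_log)
    return {
        "total_searches": len(search_log),
        "top_terms": terms.most_common(10),
        "searches_by_site": sites.most_common(),
    }

print(search_report(search_log))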

Typical search metrics include:

  • Search engine usage – Search engine logs can be analyzed to produce a range of simple reports, showing usage, and a breakdown of search terms.
  • Number of Searches performed (within own area and across areas)
  • Number of highly rated searches performed
  • User rankings – This involves asking the readers themselves to rate the relevance and quality of the information being presented. Subject matter experts or other reviewers can directly assess the quality of material on the KM platform.
  • Information currency – This is a measure of how up-to-date the information stored within the system is. The importance of this measure will depend on the nature of the information being published and how it is used. A good way to track this is by using metadata such as publishing and review dates, from which automated reports showing a number of specific measures can be generated (a minimal sketch follows this list):
  1. Average age of pages
  2. Number of pages older than a specific age
  3. Number of pages past their review date
  4. Lists of pages due to be reviewed
  5. Pages to be reviewed, broken down by content owner or business group

  • User feedback – A feedback mechanism is a clear way to indicate whether staff are using the knowledge. While a high volume of feedback messages may indicate poor-quality information, it also indicates strong staff use, and it shows that staff have sufficient trust in the system to commit the time needed to send in feedback.
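
Here is the minimal sketch referenced above for the information-currency measures, assuming page metadata (publishing date, review date, owner) can be exported from the KM platform; the field names and sample data are hypothetical:

from datetime import date

# Hypothetical page metadata exported from the KM platform.
pages = [
    {"title": "Onboarding guide", "owner": "HR",    "published": date(2016, 5, 1),  "review_due": date(2018, 1, 1)},
    {"title": "Sales playbook",   "owner": "Sales", "published": date(2017, 9, 15), "review_due": date(2018, 6, 1)},
]

def currency_report(pages, today=None, max_age_days=365):
    today = today or date.today()
    ages = [(today - p["published"]).days for p in pages]
    overdue = [p for p in pages if p["review_due"] < today]
    report = {
        "average_age_days": sum(ages) / len(ages),                        # 1. average age of pages
        "pages_older_than_max_age": sum(a > max_age_days for a in ages),  # 2. pages older than a specific age
        "pages_past_review_date": len(overdue),                           # 3. pages past their review date
        "due_for_review_by_owner": {},                                    # 4./5. pages due for review, by owner
    }
    for p in overdue:
        report["due_for_review_by_owner"].setdefault(p["owner"], []).append(p["title"])
    return report

print(currency_report(pages, today=date(2018, 2, 14)))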

Lessons Learned Metrics

Lessons Learned Basic Process: Identify – Document – Analyze – Store – Retrieve

Metrics are determined and organized by key fields from the lessons learned template and include responses gathered during the session. Lessons learned should be identified by the type of lesson captured (i.e., resource, time, budget, system, content, etc.). Summarize the lesson learned by creating a brief summary of the findings and providing recommendations for correcting them (i.e., Findings – a summary of the issues found during the review process; Recommendations – recommended actions to be taken to correct the findings). In order to provide accurate metrics, the approved actions should be documented and tracked to completion. In some cases an approved action may become a project due to the high level of resources required to address the finding. Some metrics include: impact analysis (time increased/decreased, improper resourcing, budget constraints, software/system limitations, lack of available content, etc.) and application of lessons learned (% of problems/issues solved with a lesson learned, per category and overall).
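
As a small illustration of the last metric, the percentage of problems/issues solved with a lesson learned (per category and overall) could be computed from a hypothetical log of issue records that notes each issue's category and whether an existing lesson was applied; the record structure is assumed, not prescribed:

# Hypothetical issue records: category plus whether an existing lesson learned was applied.
issues = [
    {"category": "budget", "solved_with_lesson": True},
    {"category": "budget", "solved_with_lesson": False},
    {"category": "system", "solved_with_lesson": True},
]

def lessons_applied_rate(issues):
    """% of problems/issues solved with a lesson learned, per category and overall."""
    by_category = {}
    for issue in issues:
        totals = by_category.setdefault(issue["category"], [0, 0])  # [solved, total]
        totals[0] += issue["solved_with_lesson"]
        totals[1] += 1
    per_category = {cat: 100 * solved / total for cat, (solved, total) in by_category.items()}
    overall = 100 * sum(i["solved_with_lesson"] for i in issues) / len(issues)
    return per_category, overall

print(lessons_applied_rate(issues))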

Knowledge Continuity

The keys at the heart of knowledge continuity include:

  • What constitutes mission-critical knowledge that should be preserved?
  • Where is the targeted mission-critical knowledge, and is it accessible and transferable?
  • What decisions and actions are required to stem the loss of valuable and, in many cases, irreplaceable knowledge?
  • How can the lessons learned and best practices of the most experienced and valuable workers be obtained, transferred, and stored in a knowledge base (or KM application) before those employees depart or retire?

Some Metrics Include:

  • Percentage of knowledge harvested and stored from key employees
  • Percentage of knowledge transferred to successor employees
  • Cost associated with preventing the loss of mission-critical corporate knowledge
  • Availability of a structured framework and system to store, update, access, enrich, and transfer knowledge to employees in support of their work activities
  • Ramp-up time for new hires, i.e., how rapidly they move up their learning curves and become productive

Let me know if you agree with the metrics identified here and/or if you know of additional metrics within these key initiatives that must be captured. I look forward to your responses.

Dancing With the Robots - the Rise of the Knowledge Curator

January 31, 2018

“Nearly half of HR and business leaders who were surveyed believe many of their core HR functions will be automated by 2022.” (Source: Harris Poll).

So let’s pause and reflect on this seemingly widespread sentiment. There are indeed a number of largely administrative HR functions that easily lend themselves to automation, such as providing the latest regulatory position on maternity leave and maternity pay, booking a holiday, or providing basic payroll information. It makes perfect sense to deliver this information to employees via automation (e.g. a chatbot) on a self-service basis. And well-designed, well-deployed technology can analyse results and improve the quality of the answers it provides.

Of course, this rapid, exponential automation is not going to be restricted to employees; in fact, it is going to occur even faster when we look at consumers. Last year, Amazon in the UK extended its warehouse automation to its delivery service with its first successful drone delivery. The delivery took 13 minutes and did not involve any humans on the supplier side.

From 3D printing to self-driving cars, automation is happening, it is happening fast, and it is unstoppable. It will therefore affect all aspects of our lives in our different incarnations as worker, citizen, and consumer.

The problem is that we as humans want (even need) to have it both ways. When it suits us, we want and prefer to interact with a machine over a human; yet when it suits us, we want to be able to immediately switch the service mode and interact with a human.

David's Story

To illustrate this problem, let’s go back to the HR automation situation. In our story a successful sales person (David) is going through a difficult divorce that is affecting his health. David is a highly valued employee, and the company wants to be supportive of him during this time. However, quite understandably, he does not want his marital difficulties and medical treatment discussed with anyone other than his current direct manager (Kathy). Between them they agree that in the short term David should be allowed to take time off whenever he feels unable to perform his company duties to an acceptable standard. Kathy says she will speak to someone senior in HR and let them know that this arrangement has been approved by the business but that the matter is to be treated with the utmost discretion.

In the weeks that follow, David books days off via the HR chatbot whenever he needs to. In fact, the somewhat impersonal, discreet aspect of the chatbot works really well for him given his situation. His colleagues are none the wiser, which is important to his self-esteem and position at work.

One day David puts in for another day off, and this time the request is rejected by the machine. It has been programmed to spot certain patterns and reject requests accordingly. The machine tells him that the matter has been escalated to his manager (Scott?!) along with David’s absence pattern report. Unfortunately, Scott was David’s former manager, someone with whom David has a strained relationship and who he would definitely NOT want to know about his personal circumstances. David tries desperately to stop the machine and reverse the request, but to no avail. Matters are made worse when his attempts are seen by the machine as a sign of an unsatisfied employee, which then triggers an automated ‘rate this service’ survey email. David’s emotions switch from anxiety to anger, and he completes the survey in a very negative fashion. The low NPS (Net Promoter Score), as it were, then generates a courtesy call from someone in HR in charge of low-NPS follow-up who knows neither David nor his current situation. David has to deflect the caller and is forced to make up a cover story in order to avoid revealing to a relative stranger why he is unhappy with the system and why he is putting in for so many days off so close together. He then has to write an email to Scott, cc’ing Kathy, explaining the error and asking him to kindly ignore the absence report (but alas, the damage is done).

Dancing with the Robots 

David’s story is hardly unusual, and one can cite many more scenarios where the machines will fail to read the individual and their specific circumstances correctly. We are humans, after all, and our ever-changing needs are precisely what define us as humans, not machines. Accordingly, it is a no-brainer that we will need people in place who can ‘dance with the robots’ and thereby offer a hybrid service which encompasses the best of humans (compassion) and the best of machines (efficiency).

OK so let’s play David’s story out again but this time with what I call a Knowledge Curator in place, in other words a human who is trained to dance with the robots (we’ll call her Cyd as in the great dancer Cyd Charisse).

When David first raises his personal situation with his manager (Kathy) and they come to an agreed arrangement, Cyd is either directly or indirectly notified of this employee change. Now, Cyd does not need to know why this pattern change has occurred and been approved; she simply needs to know that it has happened and to think about its implications. Her job as a K-Curator is to identify ALL points (human and machine) that will be affected by, or will impact, this change. So, knowing that David will be putting in for unplanned days off at short notice, she is aware that the chatbot and its connection to the holiday booking system will not be set up to cater for this. Cyd will make the system adjustments, and at the same time she will also double-check all the supporting information for David, which will help her identify that the system has the wrong direct manager details for him. She can also think ahead and remove the trigger for the NPS survey. It is interesting to note that this K-Curator role is cross-departmental and requires the ability to understand the business, HR policy, and process automation.

Conclusion

With the exponential increase in company automation and the reduction of human capital expenditure, it is completely logical, and indeed essential, for this process to be managed by an increase in the headcount of these new Knowledge Curators. They will cover all manner of cross-departmental scenarios where human needs, internal and external, are serviced in part by robots. Like Cyd in our story, they will all become superb dancers, able to lead and follow their robotic partners as the company pursues its irreversible path to higher levels of automation.

KMI Interviews with Recent CKM Students

January 2, 2018

As we enter the new year, KM Institute begins our 17th year of offering the flagship Certified Knowledge Manager program.  Well over 8,000 students have now taken our courses globally.  We have expanded our global reach with public and private courses conducted in West Africa, Europe, India and the Middle East, in addition to the United States.  New partnerships are forming in these regions as well as South America and South Africa.

To keep pace with this growth, we have expanded our instructor base beyond our Chief Instructor and KMI Founder, Douglas Weidner, to include KM experts both here in the U.S. and abroad.  At the conclusion of a recent CKM class, we interviewed a few of the students to get their feedback.

Please take a few moments to view the video below.  Enjoy!  And thank you to our students for participating in the interviews.

For more info on the CKM program, click here.