James William Martin

Business Transformation Professional, Author and Educator


“Either write something worth reading or do something worth writing.”
–Benjamin Franklin

Continuous Improvement Operating Model

It is important to use the right tools and methods to identify root causes and drive toward sustainable process improvements. This is especially true in today's technology ecosystems, where work is often virtual and solutions are automated using Robotic Process Automation; Big Data analytics and Agile methods are other technologies commonly used in these ecosystems. Speed of root cause analysis and solution implementation is also important. In my new book, Lean Six Sigma for the Office, 2nd Edition (page 5), I show how several process improvement methodologies can be integrated into a continuous improvement operating model. Overall, a combination of Lean Six Sigma, Agile, and other process improvement methodologies can help organizations stay competitive and adapt to a changing business landscape.

Using process improvement methods to increase productivity

Like many organizations that deployed continuous improvement in the 1980s, AlliedSignal realized significant productivity benefits by ensuring strategic alignment and rapid project execution. That is why, more than twenty years later, it remains strong in organizations that deploy it effectively. An effective LSS deployment should produce between 1% and 2% year-over-year productivity gains. Twenty years ago at AlliedSignal, we achieved roughly 4%.

AlliedSignal's experience is a useful example for other organizations looking to improve productivity and efficiency through continuous improvement methodologies. By following best practices and deploying Lean Six Sigma strategically, organizations can achieve significant productivity gains and remain competitive in today's fast-paced business environment.

Organizational change

Research shows that a large-scale change program often requires between five and twenty years to translate into new behaviors as measured by performance metrics. Execution-type cultures tend to react more quickly and effectively than cultures not aligned with metrics. Organizational change occurs when most people within an organization consistently practice the new behaviors. According to John P. Kotter in his book Leading Change, there are eight key characteristics of a successful change initiative. In addition, organizations should identify the key success metrics needed to evaluate the effectiveness of organizational change. Examples include the percentage of people effectively using new tools and methods and cumulative business benefits by type.

Customer surveys 

Customer and stakeholder surveys have become very popular since Frederick F. Reichheld's 2003 Harvard Business Review article, "The One Number You Need to Grow." A quotation from the article states, "By substituting a single question for the complex black box of the typical customer satisfaction survey, companies can actually put consumer survey results to use and focus employees on the task of stimulating growth." One key point from the article is that truly loyal customers tend to buy more over time, and as their incomes grow, they spend more with companies they feel good about.

His research showed that one question in particular, "How likely is it that you would recommend [company X] to a friend or colleague?", was a good predictor of repeat purchasing. The research team found three logical clusters: "Promoters," the customers with the highest rates of repurchase and referral, gave ratings of nine or ten to the question; the "passively satisfied" logged a seven or an eight; and "detractors" scored from zero to six. The net promoter score (NPS) is the percentage of promoters minus the percentage of detractors (passives are excluded from the calculation). A subsequent exhaustive analysis of more than 400 companies across 12 industries confirmed the predictive power of NPS.
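The NPS arithmetic described above is simple enough to sketch in a few lines of code. The scores below are hypothetical survey responses, used only to illustrate the promoter/passive/detractor cutoffs and the subtraction:

```python
# Hypothetical responses to the 0-10 "likelihood to recommend" question.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for s in scores if s >= 9)   # ratings of 9 or 10
detractors = sum(1 for s in scores if s <= 6)  # ratings of 0 through 6
# Passives (7 or 8) are counted in the total but in neither group.
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:+.0f}")  # 5 promoters, 2 detractors of 10 -> +30
```

Note that NPS is reported as a whole number between -100 (all detractors) and +100 (all promoters), not as a percentage of respondents.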

Complexity 

A phenomenon called "diffusion of responsibility" occurs in groups of people without clearly defined roles and responsibilities. In these situations, each person assumes someone else will take responsibility, and no one does. The effect depends on group size: interestingly, people who witness such events would act if alone or in the company of only a few others. Diffusion of responsibility also occurs in organizations and work teams with poorly defined roles and responsibilities. It is one of several cognitive influences affecting how people work and make decisions.

People, directly and indirectly, influence the complexity of products and services as well as the systems used in design, production, and distribution. Complexity results from combinations of cognitive, group, and organizational influences in association with technologies and their use. Depending on how risks align (e.g., recurrence and other risk types), the resultant failures may be catastrophic.

What are some solutions? Simplification, standardization, and mistake-proofing are effective preventive strategies, but they may not be enough. Other tools and methods also help reduce complexity, yet, depending on the system, catastrophic failures may still occur. A major reason for failures is an overreliance on technology rather than an understanding of social and psychological influences.

Operational assessments to identify projects

Operational assessments identify and align improvement projects for business process transformation initiatives. A well-done assessment provides a roadmap of focused projects that create tangible business benefits. Two key activities are necessary in the early stages of an assessment. The first is working with an organization's executives to ensure strategic goals are aligned across the local assessment teams. The second is quantitative analysis: reviews of financial and operational reports as well as analyses of major process workflows.

Big Data versus Six Sigma Analytics

Current Lean Six Sigma training leaves complicated but important questions unanswered. Few Lean Six Sigma belts have the training, tools, or methods to work in big-data environments. New approaches require different statistical software, large data repositories and analytical sandboxes, conditioning of different data types, and advanced statistical methods such as data mining.

The newer process improvement skills for investigating big-data processes include the acquisition, storage, searching, analysis, reporting, and visualization of data, as well as its transfer between source applications, consuming applications, and users of various types.

Searching in these environments requires new tools and methods. In addition to large file sizes and varied data structures, the resultant analyses are complex. In the past 30 years, specialized analytical methods have been created for structured, semi-structured, and unstructured data formats. Structured data are the most common format. Such data often require some transformations, but these are straightforward. Once converted into a structured format, conventional statistical methods familiar to Lean Six Sigma belts can be applied directly to complete an analysis or build models.
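To make the structured-data case concrete, here is a minimal sketch of the kind of conventional statistics a belt might apply once data sit in a clean tabular format. The cycle-time figures are hypothetical, standing in for a column extracted from a transactional table:

```python
import statistics

# Hypothetical structured data: order-processing cycle times in hours,
# pulled from a transactional table after simple type conversion.
cycle_times = [4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 4.9]

# Conventional descriptive statistics apply directly to structured data.
mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)  # sample standard deviation
print(f"mean = {mean:.2f} h, stdev = {stdev:.2f} h")
```

The same column could feed a control chart, a hypothesis test, or a regression model without any of the specialized tooling semi-structured and unstructured data require.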

Occasionally, semi-structured data must be transformed into structured formats. Parsing of data fields is one common solution. However, semi-structured data cannot always be reduced to patterns or models without newer methods. Examples include parsing text and numbers using specially designed algorithms, or searching for the number of times phrases appear and their inter-relationships.
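As a simple illustration of this parsing step, the sketch below pulls structured fields out of hypothetical semi-structured log lines with a regular expression and counts occurrences of a phrase; the line format and field names are invented for the example:

```python
import re

# Hypothetical semi-structured records: free text with embedded fields.
lines = [
    "2024-01-05 WARN order=1001 delayed shipment",
    "2024-01-05 INFO order=1002 shipped on time",
    "2024-01-06 WARN order=1003 delayed shipment",
]

# Parse the embedded order numbers into a structured list.
orders = [int(m.group(1))
          for line in lines
          if (m := re.search(r"order=(\d+)", line))]

# Count how often the phrase "delayed shipment" appears across records.
delays = sum(line.count("delayed shipment") for line in lines)
print(orders, delays)  # [1001, 1002, 1003] 2
```

Once the fields are extracted into lists or tables like this, the conventional statistical methods discussed above can take over.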

Unstructured data have many formats. Examples include information contained in books, journals, documents, metadata, health records, audio, video, analog data, files, and unstructured text such as the body of an e-mail, webpage, or Word document. 

The need for big data analytical methods will grow in the coming years. Belts will need additional training in these methods to frame questions and obtain answers for increasingly large and complex process problems which will challenge organizations.

Customer Experience Mapping

Customer Experience Mapping (CEM) is a joint supplier-customer workshop, similar to a Kaizen event, in which the key touch points between the organizations are mapped to identify gaps within the sale, purchase, delivery, and use of products and services. A CEM is built from higher-level process steps arranged in sequence. Beneath each step are one or more touch points, color-coded and marked up to identify gaps, moments of truth, and other evaluation or prioritization criteria.

An advantage of the CEM approach is that customer needs and expectations become clearly understood through mutual interaction and agreement. CEM also becomes a long-term road map or model from which to continuously improve the customer experience.

CEM information can be integrated into a supplier's formal "voice of" programs, for example voice-of-customer (VOC), voice-of-partner (VOP), and voice-of-field (VOF). These programs capture metrics that measure customer relationships from the perspectives of loyalty and transaction experience, using interviews, electronic surveys, and analyses of returns, allowances, warranty information, and the other methods discussed earlier.

An effective CEM program helps validate information collected by passive data-collection methods. Ideally, in the aggregate, the information will enable an organization to effectively focus its Business Process Transformation initiative to improve Total Customer Experience (TCE).