An Example of the Pain Associated with an IT Upgrade

Last blog entry I discussed why IT upgrades or system changes often result in pain. This time I will give an example to illustrate the common issues that companies face when they attempt to upgrade a system.

In this example, the vendor’s legacy system provides tabular reports in flat text files and XML-based data extracts. This worked well given the structure of the system’s mainframe-based database until the volume processed by the system grew dramatically, a reflection of the customer’s growing business. The customer made a contractual commitment to upgrade to the same vendor’s next-generation system, which is capable, foremost, of handling the larger volume.

The newer system’s ability to process larger volumes of transactions rests on a three-tier client-server architecture with a standard relational database management system and application servers in the middle tier. The new database was designed to handle large volumes of data with very low latency. As such, the database design is optimized for transaction processing.

To support reporting, the system replicates transactional data to an operational data store (ODS), which is essentially a copy of the transactional database. To improve response times for a select number of reports, the vendor added a data mart that pre-aggregates some of the data requiring substantial processing prior to display. The reporting system pulls data from the ODS and data mart and displays the results in a web browser interface. The developers that produced the reports took advantage of the new, more flexible browser interface and designed many of the reports so that their structure is no longer tabular. For example, many reports in the new system display summary rows amongst detail rows and allow users to collapse the report to display only summary rows, or open select details.

To replace the data extracts from the legacy system, the vendor offers customers the ability to query the ODS directly on the new system. Because the ODS is hosted on an industry standard database system, the opportunities to extract and load the system’s data into other systems are much greater than with the XML extracts on the legacy system. To take advantage of these changes, however, the customer must change their approach to extracting data from the system. Some customers embrace such change. Others prefer that the new system maintain strict compatibility with the legacy system, perhaps even at the cost of performance.
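Moving from fixed XML extracts to direct ODS queries means downstream extract jobs become plain SQL. Here is a minimal sketch of the idea, using an in-memory SQLite database as a stand-in for the vendor's RDBMS; the table and column names are made up for illustration, not taken from any real schema:

```python
import sqlite3

# Stand-in for the ODS: an in-memory SQLite database with a hypothetical
# transactions table. In practice this would be the vendor's database and
# the names would come from their schema documentation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, "A-100", 250.0), (2, "A-100", 125.5), (3, "B-200", 80.0)],
)

# Instead of parsing an XML extract, pull exactly the columns and the
# aggregation the downstream system needs in one query.
rows = conn.execute(
    "SELECT account, SUM(amount) FROM transactions GROUP BY account ORDER BY account"
).fetchall()
print(rows)  # [('A-100', 375.5), ('B-200', 80.0)]
```

The point is less the specific query than the shift in approach: the extract logic moves out of report-scraping scripts and into standard SQL that any analyst or ETL tool can maintain.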

One customer expected that they would obtain the best of both worlds: dramatically improved performance and 100% (or very near) compatibility.

The resulting pain stems from that somewhat unrealistic expectation, which may have resulted from invalid assumptions on the part of the customer, misleading impressions from the vendor, or, more likely, both.

In particular, the issue arose when the customer realized that the new reports produced output differently than the legacy system, and thus all of their internal processes that pulled information from reports and loaded the data into other systems would no longer work. Rather than investigate the potential benefits of the ODS, the customer opted to rely on contractual terms in an effort to force the vendor to enhance the functionality of the new system. As you might imagine, this approach proved time-consuming and painful.

IT Systems Upgrades: How to Avoid the Pain

Over the course of 10 years in Information Technology, I have found that the process of upgrading or migrating to new systems is almost always painful for most of the people involved. In particularly challenging cases, that pain also extends to customers. The way to mitigate or avoid some of this pain is to ensure that an organization makes a solid case for what it gains and loses from the upgrade/migration and sets realistic expectations for how its internal processes will need to change to integrate the new system.

Why is upgrading usually painful? In my experience, the pain usually stems from two sets of circumstances. The first is that newer versions of a product usually implement the same functionality differently, often because the newer version attempts to solve additional problems or uses a different or newer technology. The result is that any processes a business has developed around the old system, such as reporting or workflow, will be disrupted. A similar issue often occurs when the replacement system comes from a different vendor.

The second set of circumstances relates to the business processes that evolve around a system. Often, the new system offers a different set of interfaces or produces output differently. Because of this, an organization’s internal processes will not interface seamlessly with the replacement system.

When should organizations upgrade or migrate to new IT systems? Often, decisions to upgrade are made based on a somewhat superficial evaluation. For example, one company I worked for decided to upgrade one of its systems almost immediately following the vendor’s announcement that it would be producing a new version based on a newer platform. Management assumed that the older platform would be phased out quickly. There may also have been a little machismo in the decision (e.g., “We’re always first, always pushing the envelope…”). In fact, the vendor did not end up phasing out the older product quickly, and early releases of the new product lagged behind in terms of stability and functionality in some areas.

An organization should conduct a reasonably thorough systems analysis that assesses the benefits and costs of moving to the new system. This analysis should include information on how existing internal business processes and systems will have to change. Where possible, the organization should quantify, or at least clarify, the expected revenue-generating or cost-saving benefits of the new system. I do not suggest that such assessments are ever particularly objective, but clarity is a good first step and can help identify potential pitfalls. If, for example, the analysis cannot point to any compelling reasons to move to a new platform (e.g., the vendor has set a firm sunset date for the existing platform, or the new system provides opportunities to dramatically increase revenue or decrease costs), perhaps a move is not in the best interest of the organization.

I have also seen circumstances where organizations have had to come to grips with their expectations for a system’s useful lifetime. At my current company, we have a customer that is using our last-generation product at volumes well beyond what it was designed to handle. Because their business is growing rapidly and they rely on our system to generate revenue, they must upgrade to our latest-generation system. While rapid growth is not always predictable, in their case we could have planned a little more carefully so that the upgrade would not have taken on such an acute flavor.

What can organizations do to avoid some of the pain associated with an upgrade? I have found that the ideal situation is for a systems analyst to outline the benefits and costs of the new system in terms of revenue and capital/operating costs. If you can articulate the top- and bottom-line impacts of a new system and the net result is positive, your basis for migrating or upgrading is simplified. You can convey this vision to your organization, and it can help to smooth out the bumps that you will surely encounter. It is not a panacea–change is still difficult for most people. But you will not likely have to fight the subversive challenges that come along when people realize 70% of the way through a migration that the new system costs more and does little to increase revenues.

In some cases you cannot draw clear lines between the system upgrade and changes in revenue generation and operating expenditures. In these cases, your best bet may be to clearly articulate the cost savings over and above the current system.

In less glamorous cases, you are essentially forced to upgrade just to keep pace. In these cases, you will probably want to investigate changing to systems that will simultaneously reduce operating costs. If there don’t seem to be many options for gaining at least minor cost reductions, it may be time to revisit the organization’s business model. I realize that’s easy to say but hard to do. However, these cases usually work out best when you work with executives to develop a solution rather than just ask for more capital or bigger operating budgets.

Assess The Value of a Business Initiative

My former manager had posted on his computer monitor a simple note that caught my attention one day several months back. The note was a one-line typewritten equation that read as follows:

Value = Acceptance x Impact x Execution

I asked him what this equation meant and he explained it roughly as I describe below.

Value can refer to many things but in our case, let’s apply it to a Six Sigma project or program. Value, in this case, is a measure of the benefits that the project or program brings to the company.

Acceptance is the extent to which employees, leaders, and team members of the Six Sigma project support the project. Like it or not, the extent to which others at your organization support a Six Sigma project determines whether it will succeed or fail. Having a sense for this at the outset of the project can help the team determine whether they should proceed with the project.

Impact is the extent to which the project or program decreases costs, increases revenue, or positively impacts business metrics used by your organization. If an initiative does not impact one of these important metrics, the initiative provides no more than subtle, immeasurable benefits to the organization.

Execution refers to how well the business initiative, a Six Sigma project in this case, is implemented. If your project fails to find efficiencies that benefit the business in a tangible way, it is likely because your execution was not effective. Ineffective execution can take many forms in a Six Sigma project, including, but not limited to, a poor problem definition, failure to verify that the problem actually exists, or use of poor measurement systems such that your experiments result in no improvement in your output variables.

Now this may seem obvious, but I must note that if any one of the three input factors in this equation is zero, the business initiative will provide no value.
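The zero-factor point is easy to make concrete. Here is a minimal Python sketch; the 0-to-1 scales for each factor are my own assumption for illustration, since the original note gives no units, only the multiplicative form:

```python
def initiative_value(acceptance, impact, execution):
    """Score a business initiative; each factor on an assumed 0-to-1 scale."""
    return acceptance * impact * execution

# A well-supported, well-executed project with real impact scores high:
strong = initiative_value(0.9, 0.8, 0.9)   # about 0.65

# But if any single factor is zero, the whole product collapses to zero,
# no matter how strong the other two factors are:
no_buy_in = initiative_value(0.0, 0.8, 0.9)
print(strong, no_buy_in)
```

The multiplicative form is what makes the equation useful: unlike a weighted sum, it cannot be rescued by excellence in the other two factors.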

This little equation can prove helpful in assessing which projects are most worthwhile before you even begin implementation. You can simply ask yourself whether acceptance, impact, or execution run any significant risk of equaling zero. If so, you might be wise to work on another initiative first.

I have found this equation useful in determining how to use my time. For example, a colleague came to me and asked for help analyzing some data from a test his team had conducted. I quickly realized that they had not controlled their input variables and suggested that, instead of analyzing their original hypothesis (which they couldn’t, given the lack of control), they spend their time analyzing why they had trouble controlling their input variables. In this case, that question was a useful exercise because it helped them identify a potential opportunity to address a part of the process that was not working well.

Corporate Information Assets: Why Are They Disconnected and Messy?

I had a few second thoughts about my original notes on corporate information assets. These second thoughts came when I took on a recent project to assemble a data set from several other databases.

I collected the requirements for the database from the requestor and then identified where we might obtain each of the data elements. I then identified the groups from which I would need to obtain access for each of these data sources. This is when the issue revealed itself.

Several of the groups that owned data I needed for the project held exclusive control of their databases. Furthermore, the groups were not inclined to give me or any other analysts access to their data.

In each case, their overt reasoning for withholding access was that their data were complex and required specialized knowledge to use appropriately. They had experienced problems where other groups had misused their data and drawn invalid conclusions. They opted, therefore, to avoid having to trust anyone by adopting a rather restrictive policy of no access for anyone.

I suspect that in some cases they also have a covert reason for withholding access. They may, for example, prefer to maintain a monopoly on their data so that they are in on the credit for any analysis that uses their data.

Given these circumstances, I have to question, or at least qualify, my previous assertion that each business should operate independently. In the case I cite above, separate departments maintain an exclusive hold on data sources that could be useful, in concert with other data sources, for improving the quality of products or processes. As such, a policy of everyone owning their own data may not serve an organization well in cases where some departments refuse to share their data.

What, then, is the solution? In my case I have to appeal to executives to clear the way. This is not my preferred way of working because it usually leaves the department holding the data with bad feelings.

Ideally, a ranking executive in the organization would specify that all departments are to share data so that analysts and black belts would not have to resort to appeals for help from executives after rebuffs from data-holding departments. That is, unfortunately, rather optimistic and usually unrealistic.

Project Team Size: Smaller is Often Sweeter

How often have you seen managers make a decision–or more likely, fail to make an effective decision–by calling a meeting and inviting lots of people? At least two people from each of several cross-functional groups are invited, even though that may be redundant. The meeting consists of at least ten people. The meeting is set to discuss a process, and the problem is not explicitly defined.

The meeting lasts for sixty or ninety minutes, during which time possible strategies are discussed. Participants point out the weaknesses and risks of each other’s strategies. The conversation loses sight of the issue at hand several times. At the end of the meeting, the meeting organizer or the leader of the responsible functional group states something along the lines of, “This was a good discussion. Let’s meet again next week and discuss further.”

Then next week comes around and, if the leader actually followed through, the meeting reconvenes. Several of the previous week’s attendees are out on vacation or sick leave and others have joined or filled in for those that are missing. The meeting follows the same track as before except that some participants become a little more frustrated than the previous time because of the lack of progress. Many of the same topics and issues are discussed. The meeting ends the same as the previous meeting with the leader noting, “Good discussion. Frank, will you please set up a meeting for next week so we can continue our discussion?”

And so this dismal pattern continues until finally a higher level leader cites a different issue as the most important item for the organization’s focus. Or, the leader that owns the functional group responsible either forgets to set up a new meeting or forgets to assign a team member the task.

Or, in an information technology organization, how often have you observed a company assemble a large group to develop software? The thinking goes that because they are a large group (after all, the projects they will deliver are large projects), they will enjoy efficiencies from economies of scale. But 18 months later, the group is disbanded when higher-level leaders find that it has not produced any tangible deliverables. I have seen this a few times, from both the inside and the outside.

Large groups are often inefficient at certain types of work. This flies in the face of what you learned in operations management class where economies of scale associated with large, pooled-resource work groups are the mantra. Six Sigma projects almost always work better with relatively small project teams of between two and four core members. Likewise, many types of software development projects (not necessarily all), even those that seem relatively large, often work better with smaller development teams.

In the example of the strategy meeting above, several issues prevented the group from achieving success. Putting aside things like the lack of an agenda, ineffective meeting management, etc., one of the main issues was the size of the group. Why is it that larger groups, which arguably bring a greater range of skills when you consider each of the participants, seem to have a harder time delivering results?

The problem lies between the people. And I mean more than just communication. Lots of communication, as demonstrated above, does not always result in everyone agreeing on how to improve a process–in fact, larger groups tend to have a wider array of conflicting opinions. It is, therefore, helpful to have a leader with final decision-making authority. Then, if the group is stuck, the leader can take input from each team member and make a decision for the group.

Communication is also a factor. It is much easier for a smaller group to stay in sync during a project simply because there are fewer people to inform regarding changes. There also tends to be more self-enforcing accountability in a smaller group. If I do some work that creates a problem for the project, I have less chance of hiding or passing blame for that work. I have more vested interest in resolving the issue because if I do not, the colleague with whom I work every day must. Because I rely on my teammates (smaller teams must, of necessity, rely on one another more than larger groups), I am much more likely to feel obligated to resolve any issues associated with my work that might trip up a colleague.

I have worked on several different software development teams, and the one that was most effective in terms of delivering a product that customers used and appreciated was a small two-person team that eventually grew to three people. The odd thing was that we delivered a working product much more quickly than the larger groups with which I worked, even though the projects were similar in size. The smaller team was able to make changes and enhancements much more quickly as well.

Six Sigma projects tend to be similar–if you have to spend significant time working out internal team process issues (as you are more likely to face with larger groups), you are less likely to make tangible progress.

Obviously, you have to have a group large enough to handle the task at hand. A high-ranking (or at least very effective) champion that truly supports your process improvement goal is a necessity for removing barriers, so that you do not need several additional team members to handle this work. And you need to cover all of the basic skills necessary to complete a project (e.g., someone that can access the necessary data, an analyst capable of running valid statistical tests, etc.).

Summary: Keep your core project team small enough to operate efficiently and just large enough to have all the skills necessary.

Tests and Problem Definitions: Perspectives From Six Sigma

In most business organizations, quality departments or staff conduct tests aimed at improving the company’s bottom line. Various methods exist for designing such tests, and some produce useful results more reliably than others, depending on the type of process. In many cases, test designers make the mistake of designing tests without first clearly defining the problem they are attempting to address.

At my company, I find that many leaders and project managers design tests using assumed problem definitions. As the problem definitions are usually unspoken, they are also usually vague. Unfortunately, this puts the test designer at a significant disadvantage.

For example, in one case a leader assigned a project manager on his team to test how providing tiered customer support would save the company money. The manager assumed that the current (non-tiered) support model was more expensive than a tiered model but did not define the problem more clearly. As a result, the project manager floundered for several weeks before the so-called “test” was abandoned. The main issue was that the project manager could not design an effective test because he was not clear on what needed to be tested.

In addition to leaving test objectives unclear, not formally defining the problem up front often leaves the test design team in a muddle over which input and output variables the test should use. This was one of the major stumbling blocks for the project manager that attempted to test the tiered support model. If it is unclear which variables the test will employ, it is impossible to design effective analysis methods and overall test designs.

Another pitfall associated with not defining the problem is that the test team is at greater risk of losing track of the original goal. This became clear in the example above when a process analyst investigated the project and began asking questions. The frustrated project manager explained that the test had morphed to a test to simply determine what issues might arise when they employed a tiered support model. He also had to admit that he did not receive a clear objective from his manager in terms of the perceived problem and was severely confused.

The test also suffered from scope creep. That is to say, the project manager quickly realized that there were several questions he could attempt to answer with the test and planned to address each question in turn. Scope creep can quickly move a project or test into a dangerous state of affairs where little is accomplished because the staffer or team responsible spends all their time simply managing the test requirements.

The bottom line: Before you attempt to design a test, be sure to clearly define the problem in one or two sentences. This will dramatically improve the chances of a successful test from the perspective of test design, analysis methods, focus on the original objective, and avoidance of scope creep.

A Jack Handy Moment – Randomness and Infinity

Hang on to your socks for this one… ’cause it hurts my brain just trying to type it.

I had a Jack Handy moment the other day. Remember Jack Handy? The character from Saturday Night Live? Anyway…

I was sitting in a classroom, half listening to a fellow student attempt to kiss up to the teacher by droning on and on about financial theories and randomness and unpredictability. My mind started turning over the idea of an unpredictable stock market and all the variables that would have to go into being able to predict its behavior – “chaos theory” popped into my head as a term. Now whether or not “chaos theory” is actually applicable to this topic is beyond me. Sometimes I don’t know if my brain is really good at picking things up and applying them correctly or if my brain just likes to throw things into the mix to make me think I’m smart. My brain likes to mess with me.

I digress…

So as I sat there contemplating how many variables would have to go into predicting the stock market, it dawned on me that the size of the regression equation needed to predict such a thing would just go on and on and on… to infinity! That made me wonder if things we call “random” are really just variables to infinity? Which then made me think of all the times at work I watch people hand wave past “hard problems” as they spout “random” rather than digging into the process improvement – because THEY don’t see it as a process that can be improved. THEY see it as random when in reality, it might just be a really big set of variables but it’s NOT random. THEY are just lazy.

So, if we were to plot the number of variables in a process on the X axis and the loudness of an executive yelling “random” on the Y axis… I suspect we’d get an inelastic curve at around 3-4 variables. Everything beyond that on the X axis would result in one, giant, RANDOM scream from executives.

Fishbones and Post-It notes

It was time I actually wrote something useful. 🙂

I thought talking about an easy way to do fishbone (cause & effect) diagrams might do the trick. I’ve used this technique many times and it works like a charm. No muss, no fuss. And more importantly, no fighting!

What you’ll need:
— a bunch o’ square Post-It notes (not the tiny kind you can barely write on)
— pens
— a giant, wall-sized white board (if you don’t have those, tape up butcher paper to a wall so you can write on it)
— Dry Erase markers (or regular magic markers if using paper)
— your project team 🙂

Sequester your team in a conference room with the white board or wall o’ butcher paper. Write your output measure(s) on the white board to refresh everyone’s memory. Explain that you all are about to do some ‘silent brainstorming’. Pass a small hunk of Post-Its around to each team member and tell them to start brainstorming ANY input variable they can think of that might affect the outputs on the board. Reinforce the ‘brainstorming’ part and the ‘silent’ part. One idea per Post-It.

GO!

Let them scribble and think and scribble and think some more. Let them keep going until you hear practically no more scribbling. This part usually lasts around 5 minutes, maybe 10 if they are feeling extra creative.

Then have EVERYONE go stick their notes all over the white board. No particular order. Just stick ’em up there.

Now, if you have a small team, keep them all at the white board and have them start grouping like items together. If you have a large team, ask a couple of them to stay while the others sit down. I try to pick people who have been really quiet. This gets them participating and involved.

As they group them, have them read the notes out loud. This stimulates discussion and more Post-Its might be thought of – or some of the Post-Its might be removed upon second thought. BUT, here’s a rule: only the author of a Post-It is allowed to remove it!

You might have to help them a bit here or there. The goal here is to start getting the ‘bones’ of your fishbone. You are aiming for 4-6 major ‘bones’ with as many ‘little bones’ as you’d like.

Once the groups/bones have started to take shape, take a dry erase marker and write the category (or bone) name above/below each group of notes. Connect the groups into bones and join them to the output measures! You have a fishbone!
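If you want to capture the finished diagram digitally after the session, the bones-and-causes structure maps naturally onto a small nested data structure. A quick Python sketch, with entirely made-up output measure, bone names, and causes:

```python
# A hypothetical fishbone recorded after a session: one output measure,
# major bones as categories, little bones as lists of causes.
fishbone = {
    "output": "order processing time",
    "bones": {
        "People": ["training gaps", "shift handoffs"],
        "Methods": ["manual re-keying", "no standard checklist"],
        "Systems": ["batch delays", "duplicate entry screens"],
        "Materials": ["incomplete order forms"],
    },
}

# Sanity check against the rule of thumb above: aim for 4-6 major bones.
num_bones = len(fishbone["bones"])
num_causes = sum(len(causes) for causes in fishbone["bones"].values())
assert 4 <= num_bones <= 6
print(f"{num_bones} bones, {num_causes} causes")
```

Nothing fancy, but having the diagram in a structured form makes it easy to circulate to the team and to carry the candidate input variables forward into the measure phase.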

The teams really love this exercise. It involves them all. The ‘silent’ portion of it allows them all a voice via the Post-It notes. And you have instant buy-in because the entire team helped create it.

Process – a four letter word

Have you ever noticed how much you can twist your face up when someone says “what’s your process”? It’s as if your character has been assaulted by the insinuation that you actually use a “process”. You don’t need no stinking process! You do everything by the seat of your pants! You’re smart and in charge! Winging it! Making it up different every day! Whatever feels good at the moment is what YOU do!

Not.

Everybody follows a process for just about everything they do. I think the only time you aren’t really following a process is if you’ve never done “it” before and even then you are probably relying on a “similar experience” where you used a process! Hell, we even have processes for figuring out things we don’t have processes for. So don’t go telling ME you don’t have a process. Humph.

You follow a process to brush your teeth. Don’t you always put the toothpaste on the brush before you put the brush in your mouth? Now granted, our personal tooth brushing processes might deviate here or there (like I prefer to wet my toothbrush before and after I put toothpaste on it – call me quirky) but we all follow some similar basics.

Same goes for getting dressed. I’d suspect you put your underwear on before your pants, right? (and I don’t wanna hear about those people who don’t wear underwear, thank you very much) Again, some may put on one sock and one shoe rather than two socks then two shoes. I mean, we aren’t machines. But we DO follow processes!

Can you imagine what driving would be like without processes? Or shopping? Or banking? Or how would you like to fly on an airplane if you knew there was NO process involved? Or how about a surgery?

I’m thinking you wouldn’t like it very much. As a matter of fact, I think you’d avoid it like the plague.

So, the next time someone approaches you about capturing “your process” and perhaps “improving your process”, don’t twist your face all up and roll your eyes. Instead, grin really big and say “that sounds like fun! I love talking about my processes!”

Why Six Sigma Works

So I was asked if I’d be interested in blogging about Six Sigma. I had to think about it. Could I contribute anything meaningful to the Six Sigma community beyond complaining and whining? Did I have positive thoughts and learnings?

I decided “yes.” (but just barely) 😉

Now you are probably wondering if Six Sigma is some colossal mistake or something to be avoided at all costs – not at all. Six Sigma is an outstanding set of tools and methodologies that actually work when used properly in an organization that supports it.

Six Sigma becomes difficult and painful if you and a handful of others are attempting to do DMAIC projects in an organization that has not “bought in” yet. It will drive you nuts to have a great project with business impact and be unable to get living, breathing, thinking bodies to participate on your team. It will drive you nuts to get pushback at EVERY turn. “No” to this and “no” to that and “no” just because someone has the power to say “no” and they like to say “no”. (I’m starting to feel like a Capital One commercial) “No” because they don’t see the value but they won’t give you the resources to help you prove the value.

If “they” ever do throw a project your way, the requested timeline is impossible and don’t hold your breath that “they” will actually do anything with your findings. Forget the “I” and the “C” in these types of orgs. They just want you to do the “DMA” part.

When you try to explain why a longer timeline is in order, they roll their eyes and exclaim, “This Six Sigma takes too long! We don’t have the time for this! We are running a business and we need answers TODAY!” And I bite my tongue. What is running through my mind is, “Oh yah?!? Well you sure didn’t squawk when it took you years to make this mess! And look what shooting from the hip has gotten you!” But what comes out of my mouth is yet another compromise for a ridiculous timeline, knowing that the chances of “them” using the learnings of this project are those of a snowball in hell.

The sad part is that a lot of companies want Black Belts around like feathers in their caps. The hopeful part is that there ARE companies out there that have fully embraced Six Sigma and have hugely impacted their bottom line – proof that THINKING and finding some semblance of PROOF and DATA and not tolerating unexplained VARIANCES in PROCESSES actually works!

Which reminds me… I’ll write a bit later about some of my encounters with people when we needed to talk about “their process.” Nice.