Tuesday, February 02, 2010

To Infer or Not: How to Discriminate Between Conflicting Hypotheses

A test of Sidewiki linking to my blogs on Blogger, and another SEO link to my http://scorecardstreet.spaces.live.com blog.

Bayesian inference has been hotly debated of late, especially since new research, including a functional MRI study, revealed spatially specific attentional modulation in human primary visual cortex. That is a hypothesis I held for a long time without the means to do the primary research required by a more objective method of induction, as the scientific method postulates.

Thus, this new piece of data allows me to revise my estimate of my degree of belief in the hypothesis' validity.
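
As a minimal sketch of what that belief revision looks like, here is a hypothetical Bayes' rule calculation in Python; the prior and likelihood numbers are invented purely for illustration and are not drawn from the study.

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    # All values below are hypothetical, chosen only to illustrate the update.
    prior = 0.60             # prior degree of belief in the hypothesis
    p_e_given_h = 0.80       # chance of seeing the fMRI evidence if the hypothesis is true
    p_e_given_not_h = 0.30   # chance of seeing it anyway if the hypothesis is false

    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / evidence

    print(f"Prior belief:     {prior:.2f}")      # 0.60
    print(f"Posterior belief: {posterior:.2f}")  # 0.80 -- belief revised upward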

:) Aah, yes, and it's only Tuesday!

Laura Edell
lauraedell@dashboarddesigndiva.com

in reference to: Bayesian inference - Wikipedia, the free encyclopedia

Wednesday, June 13, 2007

Friday, June 08, 2007

Not all that glitters is gold, and the same holds true in a call center. Most performance management and incentive programs focus heavily on financial or monetary benefits to help incentivize agent performance. But in a black belt project you focus on ways to drive down costs, so I ask you: can it be done?

The answer is YES! Especially in your offshore and outsourced operation centers. If you have ever traveled to meet your teams overseas, you will find a wonderful group of self-motivated and driven individuals who want to excel at running your account, contrary to the popular belief that agents are always looking to cut corners, 'game the system' or do whatever they can to get off of the phone. While the latter may hold a kernel of truth, the real opportunity lies in the environment in which they work. Take a look around the center and ask yourself a few critical questions: Do you see a lot of recognition on the walls, i.e. portraits of high-performing agents? Do you hear about a lot of motivational events, like 'Agent Appreciation Week' or a 'XXX Food Cookoff Competition', or any SWAG giveaways in which the agents can win logo prizes? Managers with preconceived notions may scoff at these (bad black belt if you are scoffing at any improvement effort without first analyzing the data -- but shush, I won't tell), yet these softer levers matter more than most assume.

Most performance lies within the agent; it starts with tenure and moves into morale as they become more seasoned. Don't worry if you have just received a Six Sigma project related to improving agent performance without raising salaries, because softer things like those I mentioned above do make a difference. You will just need to set up interviews with agents to prove it. I suggest stratifying your data by the following three dimensions: a) tenure, b) current performance (high performer, low performer or average) and lastly c) agent name (for identification purposes). Your output should be agent satisfaction scores (C-Sat surveys given to customers based on their experience with an agent during a call or email), but if it is AHT, that is fine, except I would break it down into the components of AHT (see previous post) since AHT by definition is an AVERAGE. Lastly, overlay all of this using the Matrix Plot graphing option in Minitab to view interactions between each input dimension and the output; a rough equivalent is sketched below.

Then interview those who fall into the following buckets -- newbie, high performer vs. oldie, high performer; newbie, low performer vs. oldie, low performer -- and compare drivers. You should ask questions that really get them thinking, and make sure you set up a "safe environment", i.e. one that is off of the floor, possibly in a conference room, so they feel free to answer honestly. Reassure them that their answers are anonymous and for research purposes only, and really try to relate on their level before launching into your questions. Like I said before... not all that glitters is gold, and such is true of what will drive your human-performance-related Six Sigma projects.
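
If you do not have Minitab handy, here is a minimal Python sketch of the same matrix-plot-and-bucket exercise using pandas; the file name, column names and the six-month 'newbie' cutoff are all hypothetical placeholders for your own export.

    # Hypothetical agent export with columns: agent_name, tenure_months, perf_band, csat
    import pandas as pd
    import matplotlib.pyplot as plt

    agents = pd.read_csv("agent_performance.csv")

    # Encode the performance band (low/average/high) so it can sit on a plot axis.
    agents["perf_code"] = agents["perf_band"].map({"low": 0, "average": 1, "high": 2})

    # Matrix plot: pairwise views of each input dimension against the C-Sat output.
    pd.plotting.scatter_matrix(agents[["tenure_months", "perf_code", "csat"]],
                               figsize=(8, 8), diagonal="hist")
    plt.show()

    # Interview buckets: newbie vs. oldie crossed with high vs. low performer.
    newbie = agents["tenure_months"] < 6
    high = agents["perf_band"] == "high"
    low = agents["perf_band"] == "low"
    for label, mask in [("newbie, high performer", newbie & high),
                        ("oldie, high performer", ~newbie & high),
                        ("newbie, low performer", newbie & low),
                        ("oldie, low performer", ~newbie & low)]:
        print(label, agents.loc[mask, "agent_name"].tolist())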

Saturday, March 03, 2007

flowergirldujour.blogspot.com


--> check it out for the latest in my life sagas...

Tuesday, January 30, 2007

Laura's Case Study on how scorecards can help organizations with their PM; moving away from reports and spreadsheets in order to answer the "WHYs"...

http://download.microsoft.com/documents/customerevidence/22483_Expedia_Case_Study.doc
http://download.microsoft.com/download/c/d/f/cdfa926a-d3af-4120-89fb-7c3bbac5c771/Technical_value.ppt

Laura Was Published in eWeek!

www.eweek.com/article2/0,1895,1877881,00.asp

Laura Was Published in InformationWeek!

http://www.informationweek.com/shared/printableArticle.jhtml?articleID=188703431

Laura Was Published in TDWI:

http://www.tdwi.org/news/display.aspx?id=8233

Wednesday, January 10, 2007

Scorecards Anonymous in a Web 2.0 world...?
Recently, I was asked to comment on what I thought about Web 2.0 -- that's like asking my opinion of internet surfing or white rice: plain, generic, and a question filled with the subliminal ignorance of one trying to sound informed yet missing the point by a long shot. In order to understand my thoughts on Web 2.0, one needs to home in on a specific aspect -- user-generated content is quite different from user-friendly and helpful widgets, which is equally different from user-generated pages... or wait... is it all that different?

If you ask some, they will passionately argue for the new benefits that 2.0 will offer the end user. If you ask others, they won't have the first clue what you are talking about, thinking Web 2.0 is some new software program that Microsoft has just released. And if you ask still others, they might claim to be the all-knowing source of information on the subject -- say, a Maven wanna-be (with kudos to 'The Tipping Point' author Malcolm Gladwell) who, at the risk of sounding daft, is driven above all by the desire to look intelligent to others on every topic rather than honing their skills on just one. That person might respond that Web 2.0 neither merits the accolades of the truly knowing nor warrants a staunch defense of the nonsense of letting users upload home movies à la 'America's Funniest Home Videos' or personal feelings through diary blogging à la 'Sex and the City' meets 'Girls Gone Wild'.

In the end, what I am writing and what you are now reading is, in its purest form, a type of Web 2.0 offering: self-branded, scorecard-focused, self-generated content that aims both at delivering a high-value information store to other scorecard-ers worldwide and at providing a platform which, as Tom Peters would credit, yields its author (me) an opportunity to brand herself as a topic Maven. All of this before the Internet of the future blows up into full proportion, where Internet searches are intuitive and frustration levels are low; where content seems to 'auto-magically' be pulled for you based on your preference requirements and then pushed out to you, with one crucial difference from the way it is done today... the key... la clef... on demand, rather than statically rendering the same set of content to every user of that Internet page, not customized at all to your liking or preferences. Nothing is easy... everything has a price... Can scorecards be set up as intuitively as, say, a widget on an existing page? Something users can drag and drop at will? The future is near and it is bright... check out MS PerformancePoint 2007... not quite there, but close.

Saturday, December 23, 2006

While walking through the streets of Hermanus, a small town outside of Cape Town, South Africa, I wondered to myself, "How much has the PM movement infiltrated a country so far removed from the average American's frame of reference?" (To settle those of you who have just read that statement and felt your hackles go up, let me follow it up by stating that this is usually the case unless those Americans come from the financial, medical or telecom sectors of corporate America. Admit it... you still think there are lions, elephants and giraffes roaming the streets of South Africa, which, for those of you who do not know, is a country -- yes, a country, not just a reference to the SOUTHern part of the continent itself.)

OK, now that we have covered our geography lesson for the day, let's get back to the question at hand... What do you think? Do you think BI and performance management have made as much headway down here as they have in the US, especially over the course of the last few years?

At first, I will admit, I didn't think so... There are still many regions in the US that wouldn't have the first clue what PM and BI stand for, let alone how to employ the strategic advantage that each presents. But I was wrong...

Dichotomy is defined by Wikipedia as "a division into two non-overlapping or mutually exclusive and jointly exhaustive parts. They are often contrasting and spoken of as 'opposites'. The term comes from dichotomos (divided): dich- ([in] two), temnein (to cut)" (en.wikipedia.org/wiki/Dichotomy).

Business intelligence, on the other hand, is defined as "the process of gathering information in the field of business. It can be described as the process of enhancing data into information and then into knowledge. Business intelligence is carried out to gain sustainable competitive advantage, and is a valuable core competence in some instances" (en.wikipedia.org/wiki/Business_intelligence).

If you want to think about the ubiquitous impact of BI in your workplace, all you need to do is ask yourself, "Have I ever opened or reviewed a spreadsheet?" Many people think that just because numbers appear in cell A3 of an Excel worksheet, they are looking at data. And while, in its most raw format, a singular number in a cell could be seen as data, how often is that the case in Excel, versus, say, a formula showing up in A3, something like =A2 + B2, which results in a singular number displayed for the end user, or you?
OK, that was a bit difficult for the non-analyst to follow. To break it down, simply take the following case:

If you have 2 apples and 3 oranges and you want to know how much fruit you have in total for your restaurant, you might enter the 'data' into a spreadsheet that looks something like this:

In A1, B1 and C1 cells, enter 'Apples', 'Oranges' and 'Total', respectively. These serve as your headers.
Underneath those headings in A2, B2 and C2, enter '2','3' and then stop. What do you enter in cell C2?

You could easily eyeball 2 + 3 and enter a hard-coded '5', but is that the most useful way to enter DATA into a spreadsheet? What happens if the farmer that serves up your fruit makes a mistake and sends you 3 apples and 4 oranges? When you enter the new numbers, does the 'Total' column change along with the new values?

The answer is no. But if you were to enter into C2 the formula =A2 + B2, the end result is still 5 in the first example; when you modify A2 and B2 with the new values of 3 apples and 4 oranges, it automatically recalculates the total to equal 7. This is the relativity of information, or the non-absolute transformation of data.
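
To make the same point outside of Excel, here is a tiny Python analogue of the hard-coded total versus the formula; the variable names and numbers are purely illustrative.

    # Hard-coded total vs. a computed ("formula") total -- a rough analogue of the
    # spreadsheet example above.
    apples, oranges = 2, 3

    hard_coded_total = apples + oranges    # like typing '5' into C2: evaluated once, never updates

    def formula_total(a, o):               # like entering =A2+B2: recomputed on demand
        return a + o

    apples, oranges = 3, 4                 # the farmer sends new quantities
    print(hard_coded_total)                # still 5 -- stale data
    print(formula_total(apples, oranges))  # 7 -- information derived from the current data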

This is a simplistic example, but it serves an illustrative purpose. The absolute reference to 5 total pieces of fruit is, in my eyes, useless for most organizations that utilize long-term or big-picture strategy maps to manage performance. Why?
Because the analyst man-power needed to continually update cells as the data changes is huge and unproductive, whereas a relative reference (using the formula rather than the hard-coded value) is a one-time effort that, utilized properly, allows analysts to actually analyze rather than just report on information. Most companies use their analysts mostly as reporting analysts, rather than as the data mining experts or statisticians they should be, because of the time involved in manual ETL (the extraction of data from the source systems, followed by the compilation, computation and validation of data integrity, none of which guarantees there won't be calculation errors). By the time that process is done, the analyst is usually racing to meet some arbitrary or pre-determined deadline and hasn't the time to actually draw conclusions from the data they have now transformed into information.

If you think about the two definitions that I led off with, does this dichotomy within most companies translate into analysis strength or weakness within an organization?

I would say the latter...

There are only a few companies that get "it" and invest in performance management systems, balanced scorecarding, and strategy maps tied to performance indicators, with line of sight through the business units, tied to employees and downstream through to the end transaction and to customers' experiences and perceptions of how well you serve their needs. Whew... that was a mouthful... And as I always stress, one should never go down that path without having first built the system manually and felt the pains of manual ETL and aggregations; who better to help decide which product or platform you should RFI during the software selection process than the analysts who do the work itself? Also, if you automate without a manual framework, you have no roadmap from which to build your rules or requirements; you are left flying blind.

This would be like driving on a winding road at night with a blindfold on... Figuratively, it is corporate suicide.

Yes, there are companies that buy PM or BI software, only to later complain that it 'doesn't work'. This is usually a false statement driven by a lack of understanding of the BI product or platform, and is more a complaint about how the organization chose to implement that platform within its IT infrastructure and about the subsequent processes and business rules built into the BRE (business rules engine, or 'if this, then what').
But when you do have your manual system in place, and you do start your PM process -- say with a product like Microsoft's Business Scorecard Manager (BSM), which is currently being productized as 'PerformancePoint 2007' (don't get me started on how excited I am about having the power of an OLAP administrator and report designer, thanks to the ProClarity acquisition, all in one with a nice little scorecarding package, and, oh, did I mention, integrated with the Office platform; what more could a BI dork like myself ask for?) -- you, the end user or analyst reading this blog, become the organizational fire preventer rather than the fire fighter of the past, enabled by technology to do your job better.

When data is transformed into information systematically, you are left with a resource pool that can actually utilize those Harvard degrees, modeling out 'what-if' scenarios based on information served up at the click of a button. It is the difference between data and information that motivates the "cream of the crop" companies to achieve stretch goals and BHAGs...

All in all, a dichotomy exists between the manual and the systematic approaches to analyzing information and to transforming data into the information that serves up PM for any organization. By conquering and dividing out this duplicity, you are left with a strength and knowledge that outweighs what most of your competitors can bring to bear.

Monday, October 23, 2006

Sales folks present an interesting dichotomy in nature: these hunting pariahs, tracking their consumer prey, asking whether you like the Sonics. You fall victim only once, but painfully so that one time; here's how to avoid the same. When one of them shoots you with an air pistol, followed by the audibly annoying sizzle their forefinger makes when pressed against their "hot" shoulder (think of the 'stzzzzzz' sound people make when they have done something awesome -- like they are the sh*t), just know that they are about to tell you they scored two court-side tickets through their self-proclaimed "connections in the biz," and that they are waiting for you to bite at the bait before asking you to tag along.
OK -- seems harmless, no? Absolutely not!! Here are the clues to look out for:
1. First, the air pistol and the "drop it like it's hot" finger sizzle are both played out;
2. Second, sales folks can be disingenuous at a disproportionately higher rate than people in many other, equally popular industries;
3. Third, while you may think they will leave work at work, come on now -- they are sales people, after all, often carrying the belief that one must "sell" everything in their life, at all times. Playing devil's advocate, one might say this isn't so good, since they tend to "fake it 'til they make it"; yet in some situations that tactic, employed tactfully, can be extremely powerful for the people involved.
4. Fourth, people skills beyond the norm, which for the good ones are innate by definition -- intrinsically generated, and often a skill-set that cannot be learned or taught to others at that level.
Sorry to break it to ya...

Wednesday, July 19, 2006

Saturday, July 08, 2006

I am often amazed when I find out how advanced we are at EXPE with regard to our Scorecard program -- over the course of the last two years, I have presented at a handful of conferences, including CFO and CIO magazines, ICPQ Quality, and BSCOL, where multiple people would approach me afterwards to ask how they could mirror our work.

Well, let me start by saying that it has taken us almost three years to work through the issues that come when any company deploys a brand new quality program or measurement system. Often, you'll have integrity issues with the data requiring ETL, or transformation and cleansing of the data, something that is often hidden within disparate databases silo'ed across your company. Next, you have the problem of 'jumping into the weeds' too soon, vis-à-vis the overexcitement that comes with having clean data ready to extract and publish in a balanced scorecard format. I find that people, when presented with the option of which data should constitute their most important performance indicators or KPIs for their scorecard, will act as if it is a 'Chinese menu' and try to order one of everything, thus bogging down the scorecard with too much information. While anyone who reads my blogs knows how I feel about business metaphors like 'jump into the weeds', it is apropos to mention at this point that 'staying at the 30,000 or 60,000 foot view' for as long as you possibly can is critical. I recommend not having the project manager or program manager who will eventually maintain the ongoing balanced scorecard from an administrative perspective be involved with the strategic decision making around determining relevant and strategic KPIs that cover all four perspectives outlined by Kaplan and Norton's balanced scorecard methodology.

And you do not have to launch out of the gate with the perfect version of your scorecard with all four perspectives included. At EXPE, we started with the VOC, or Voice of the Customer, perspective, followed by our finance perspective, then internal voice (i.e. your employees) and learning and development. While we weren't truly balanced when we launched, nor were our KPIs perfect from the get-go, we took the next three years to manually create and iterate on the scorecards while we went across the company on a 'road show' to build support and executive buy-in for our program. This is the next piece that is critical for any quality program to be successful. As business classics like 'Good to Great' and 'Built to Last' remind us, what is the point of quality if it isn't tied to the bottom line? Quality for the sake of quality is a slippery slope (oops... another metaphor), often causing companies financial pain if they go with guns blazing into any new quality program without first understanding how the VOC impacts the top and bottom lines.

Being the solutions-oriented person I am, at this point I would be asking, 'This is all great -- but how exactly do you tie VOC to the strategic vision of your company?'

Answer: it's not easy... and while I may sound redundant, it took us three years before we A) got tied into the strategic planning process, B) got the entire company engaged and enthused by the methodology, and, leading up to my third point, C) even thought about automating the process.

After three years, we finally automated as part of the early adopters program for Microsoft's BSM product (code-named Maestro during the beta program). For a fraction of the cost of Cognos or Business Objects, other software vendors who offer scorecard modules, Microsoft developed a flexible and user-friendly tool for connecting to many different data sources and data types (i.e. relational (SQL/Oracle), multidimensional or cube data (i.e. Analysis Services), or even flat file and manual entry). No matter how technical you or your company are, you can use the software to help you.
But I stress: do not automate at the same time as you build your program, even if your executive sponsor or boss approves buying the software ahead of time. If you do not have both executive buy-in and company-wide adoption of your scorecard program, you will end up with a 'cool tool that people find interesting' that, over time, becomes less and less important to the stakeholders, especially if you see a lot of turnover or mergers and acquisitions within your org. In the end, in order to be a truly balanced view, it must roll all the way up to the top of the food chain; you will know you have achieved nirvana when CEO-level strategic planning and execution are based on KPIs measured on your Scorecard. There is no single view that everyone must look at: CEOs will want an overall company/brand/lines-of-business view, but would a call center agent find that useful? Not so much. They care about their AHT and adherence to schedule, so their view would be very different. But it is all very do-able with MS's Business Scorecard Manager (or BSM) product.

For reference, check out BSCOL, or the Balanced Scorecard Collaborative (bscol.org) -- they have a great website full of information, and they run conferences during the year that bring together some of the leading experts in the scorecard space, including the fathers of the methodology, Norton and Kaplan (my personal heroes)!

Saturday, February 25, 2006


Interestingly, in my travels through the random world of Operations and Contact Centers (a forcibly implemented rename of the vanilla version, the call center), I have seen a true need for process excellence. Yet, of late, 'process excellence' has become a trend rather than a disciplined methodology or philosophy.

This 'flavor of the month' mentality has been around forever, but with business book publishing so popular, the methodologies of TQM, CMMI, CRM, CEM and others have breezed through many contact centers like a hurricane, often accompanied by SWAG ('stuff we all get', or the PC definition of the 'S'), agent rewards, and other external motivators. Often, centers are dedicated to the effort and a large kickoff celebration usually occurs.

The executives think that the call center agents (get the subtle joke?) are into the pomp and circumstance, and while some agents definitely buy in (mostly for the free giveaways, and why not? I was once an agent too!), my experience has shown me that most, including myself, are skeptical or simply do not care. If they have any tenure under their belt, agents will give you an earful about the one-to-two-year trends that have swept through their call centers. And when a new program pops up spouting 'The Customer is #1' or 'We Love Our Agents' or 'Customer Retention is Most Important', the agents know that under the covers are still the demands of 'Reduce your Handle Time' or 'WE HAVE 50 CALLS ON HOLD' (which, incidentally, is often also spouted by a ticker-tape monitor relaying the current calls holding, as if we were monitoring stock values at a broker's office on the NYSE).

And while most centers have seen short-term financial gains, or have been able to quantify the 'soft savings' from increased customer retention or propensity to repurchase, over the long haul these programs have bordered on being a 'flavor' rather than a true cultural shifter.

And then, without notice, Six Sigma swoops down on the centers. And believe me, I support the initiative... I am one of the black belts, but I fancy myself different, having started from the ground floor as an agent at a different company and ridden the corporate wave to a new role at a new, Internet-based travel company. This is where the term 'ignorance is bliss' originated, I'm sure; people not wanting to know both the blessings and the curse of launching a new quality program in an existing center.

It is best to start with Design for Six Sigma when you first launch a center. But when that isn't an option (probably 98% of the time), there are other things we can do to win over the agents.

1) Schedule a meeting with one or two agents (one with high quality scores), a new hire (for a fresh-eyes perspective), a trainer and a supervisor. The objective is to discuss ways to reduce waste in the center (don't focus only on handle time... it tends to move inversely with First Call Resolution).
2) Ask for ideas of things that delay them on the call and afterwards. Note them on the 3M flip boards that come with the sticky side for easy attachment of the sheets to the wall.
3) Create three columns: one labeled 'Noise', one labeled 'Controllable' and one labeled 'Standard Operating Procedures (SOP)'. Add each of the ideas from Step 2 to one of these columns. Circle those that are in the agents' control (Controllable) in one color, circle those that are in the executives' control (SOP) in a different color, and cross off those that cannot be controlled by the agent. Focus on the ones that are under the agents' control and the business' control (unless you have items in the Noise column that you find out later can be addressed by agents or management; if so, move them to the corresponding column and out of 'Noise'). Take the items in 'Noise' and rewrite them onto a new sheet. Title this sheet "Issues Out of Agents' Control" and move it to the side -- you will be revisiting this sheet in Step 9.
4) Ask for ideas and ways you can help. Note them on the flip boards.
5) Encourage the team to visualize every call as having three steps in a storyboard -- Think of yourself as a screenwriter and the caller is Hollywood -- This is key:

A. What does Hollywood want? An Action, Romance, Comedy...(In terms of the center, it means 'What does the customer want' A Refund, Change, Recap...)

B. What did we agree to do? (List what was asked for in Step A along with the associated written procedures -- NOT the actual steps you [the agent] took; that comes in Step C.)

C. What did we do? (List what you actually did for what you wrote in Step A.)

The gap between B and C presents an opportunity for improvement that is both measurable and scope-able.

Agents who focus on these basic steps will listen more closely, ask enough questions and keep their promises. The result is likely to include reduced handle time, fewer repeat calls and transfers, and increased FCR.
6) Ask for ideas to test this concept for reducing handle time. Conducting a pilot with one team may be a great place to start.

7) Develop a Communications Plan and Strategy -- one idea I have read about is to communicate this approach to one team, share the results of your meeting and ask them to pilot the concept for 30 days. We actually work with our Communications Manager at our company for internal Operations/Contact Center broadcasts, and with our internal PR team for anything that goes company wide.

Let your center director know about this project and promise to inform her about the results. You may want to incorporate this new philosophy into your training and coach supervisors on using this new approach with their respective teams. A good communications strategy is key to building confidence in all of your center management groups.
8) Then look at the result. ACD reports may reflect an increase in handled calls by each agent and reduced TT or talk time.
9) Last but not least, we are not ignoring the Noise items. As the final step, take that flip chart titled "Issues Out of Agents' Control" and address the items one at a time! Stack rank them in order of risk -- how? Group the agents together and create an FMEA (Failure Modes and Effects Analysis). The result? RPN scores, or risk indicators.

These are based on taking the Occurrence (rated 1 to 10, with 10 meaning a lot of occurrences) x Severity (same scale as Occurrence) x Detectability (rated 1 to 10, where 10 means NOT detectable). Detectability is a bit different to interpret than 'O' and 'S'. The product of these three numbers is the RPN, or risk indicator. Sort these in descending order for a stack-ranked list to help you work first on the issues that impact the agents the most; a small worked example follows.
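
As a minimal sketch of that arithmetic, here is the RPN calculation in Python; the issues and their 1-to-10 ratings are hypothetical and only meant to show the sort.

    # RPN = Occurrence x Severity x Detectability (each rated 1-10;
    # for Detectability, 10 means the failure is NOT detectable).
    # The issues and ratings below are invented purely for illustration.
    issues = [
        {"issue": "CRM screen freezes during call",  "occ": 7, "sev": 8, "det": 3},
        {"issue": "Outdated knowledge-base article", "occ": 5, "sev": 6, "det": 9},
        {"issue": "Static on phone headsets",        "occ": 3, "sev": 4, "det": 2},
    ]

    for item in issues:
        item["rpn"] = item["occ"] * item["sev"] * item["det"]

    # Descending RPN gives the stack-ranked list: work the highest-risk issues first.
    for item in sorted(issues, key=lambda i: i["rpn"], reverse=True):
        print(f'{item["rpn"]:4d}  {item["issue"]}')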

All in all, including the agents up front in the planning and strategy of a Six Sigma deployment is far more effective than swooping in with a new quality program. SWAG, however, is still good!

~Laura

"If you can't describe what you are doing as a process, you don't know what you're doing" -- W. Edwards Deming