Thursday, April 28, 2016

Agile Leadership

The “agile” label has been used heavily in the software development world for many years and is now in vogue for businesses and IT/EA governance programs (and many other disciplines, I’m sure).  The label describes a trait that all businesses and organizations aspire to: being responsive, fast, nimble, or in a word...agile.  In essence, “agile” should be as much a leadership philosophy as it is a governance or software development methodology.  If we want our EA programs and our governance programs to be agile, then it’s going to come down to the people - the leadership - to make it happen.  So, what is agile leadership and how is it achieved?

Agile leadership is essentially the antithesis of bureaucracy, and it involves blending three major leadership functions that typically suffer in a bureaucratic world: setting expectations, empowerment, and accountability.

  • Expectations – What are the results that you or the organization want to achieve? Know what they are and state them.

  • Empowerment – Do you have talented, experienced, responsible people?  If so, then allow them to make decisions without seeking approval.  That’s what empowerment is.

  • Accountability – If your people know their expectations and are empowered to make decisions, then they must also know they will be held responsible for their decisions.

There may be nothing that allows an organization to better achieve agility, speed, and responsiveness than the simple actions of setting expectations, empowering the right people, and then holding them accountable.  It also stands to reason that few things will give the people being empowered greater satisfaction.  It’s worth pointing out Frederick Herzberg’s research on job satisfaction (Herzberg, 1987), in which the top four factors leading to job satisfaction were 1) achievement, 2) recognition, 3) the work itself, and 4) responsibility.

The most important thing to understand about bureaucracies is that they are not out-of-control, uninvited party crashers that ruin businesses.  Rather, they are structures intentionally put in place to achieve a very specific objective: to make sure bad decisions are not made.  The benefit would be great if the costs were not so high.  The price of this desire for near-perfect decision making is that large chunks of time are consumed and there is almost no accountability.  When decisions are finally made, after multiple reviews and the seeking of consensus, the amount of time elapsed often renders the decision irrelevant (or weakened, at best).  Also, who’s accountable?  The answer is largely no one; and when nobody is accountable, progress tends not to take place.

Unfortunately for many, forsaking the stifling and unaccountable safety of the bureaucracy feels risky.  However, for an organization with talented, experienced, responsible people, the risk is easily mitigated.  These people know what needs to be done and how to do it, so confining them to a bureaucracy is a real shame.  If an incorrect path is taken, the sheer speed with which it is taken will usually soften the impact significantly.  In other words, failing fast is a lot better than failing slow, or failing through inaction or atrophy (the most common outcome of bureaucratic structures).  As agile EA leaders, all we have to do is know our people and empower the right ones; the upside in speed and agility, not to mention job satisfaction, will be tremendous.

References

Herzberg, Frederick. (1987, September-October). One More Time: How Do You Motivate Employees? Harvard Business Review. Retrieved from http://scholar.google.com/scholar_url?url=https://xa.yimg.com/kq/groups/22741279/1871431599/name/One_more_time.pdf&hl=en&sa=X&scisig=AAGBfm1uUvvj1BuLUUxfm4pTEoVgU2GfjA&nossl=1&oi=scholarr

Friday, April 22, 2016

Diversity of Thought

It seems widely accepted these days that diversity is an asset to a business.  So, how does a small-to-medium sized business (SMB) develop sound IT and business strategies when the leadership headcount is fairly small, IT leadership often is a single person, and there may be a founder-owner who dominates decision making?

I’ve been thinking about this as I reflect on some of the dubious decisions and strategies of my current employer (default management law firm) from the past several years.  Here are a few that dominate my daily existence:
  • We chose to build our own back-office case management system (CMS) when there were proven commercial systems available.  We now have a high cost of ownership compared to the commercial systems, without gaining any competitive advantage.
  • The system mentioned in the previous bullet was built from 2012-2013 as a client-server system, forsaking a web-based solution.  We now experience all of the client-server maintenance issues (400 users spread out geographically) that made web-based applications so attractive to businesses way back in the 1990s.  

  • We chose to forsake investing in a subsidiary law firm’s CMS, with the rationale that in the near future, that system would be replaced.  It’s been nearly ten years, and the legacy CMS is still running with no investment in features and automation, which is severely hurting the subsidiary firm.

  • Several years ago we chose not to migrate from a legacy system because it was time-consuming for IT and the business to figure out the data migration requirements.  At the time everyone was comfortable and familiar with the legacy system, so keeping it around felt like it would be no big deal.  We’re still paying heavily with quality and productivity issues for what amounted to a myopic decision of convenience.

  • A decision was made to build a custom invoicing and check printing system when we already had a commercial accounting system in-house.  This decision was made, again, out of convenience, because figuring out how to interface with the commercial accounting system was considered too challenging.

My intent here is not to cast stones (even though it appears that I am) regarding past strategic initiatives, but instead, it is to show how dubious decisions can be made when there is a lack of thought diversity.

Many enterprises have cultural challenges.  My twenty years at General Motors showed me how the culture of a massively large organization can suffocate change.  At an SMB, change should come more easily.  However, it seems to me that the big risk isn’t enacting change, but rather enacting the right change when there is a lack of thought diversity.  At least that’s the conclusion I’m drawing about why so many dubious decisions were made, decisions for which a price continues to be paid.

So, what should a resource-constrained, diversity-deficient SMB do to mitigate the situation?  Participating in industry trade groups and business associations is certainly a good start.  Creating an advisory board of external resources and finding a similar, but non-competitive, business to share and vet ideas with are also good approaches.  Engaging consulting firms (for both IT and the business) for strategy and internal assessments should also be common practice, as it can quickly expose a business to a large variety of choices across many industries (Bain & Company, n.d.).

Ultimately, the answer to the “diversity of thought” problem is for the business to engage its external environment as much, and in as many ways, as possible.  Business and IT strategies are far too important to trust to so few.

References

Bain & Company. (n.d.). What to expect from management consulting. Retrieved from http://www.bain.com/offices/brussels/en_us/careers/top-management-consulting.aspx

Saturday, April 16, 2016

Measuring The Hidden Factory

A few weeks ago I blogged about the hidden costs (or hidden factory, as Lean Six Sigma refers to it) associated with quality problems, which in my case was inspired by redundant software systems at my current employer.  This week I want to continue with the topic by focusing on measuring the size of the hidden factory, or the cost of poor quality.

When we think of what a company does to add value to a product or service for its customer base, we should look at it as if there are two factories at work.  The first is the primary factory, or the visible factory, that executes the normal, value-added processes to produce a value-added output.  The second is the hidden factory.  This is the factory that kicks into action when things go wrong, and it costs a lot more than what people might think.  My posting on Hidden Costs / Hidden Factory from a couple of weeks ago will give some perspective on this.

Essentially, the hidden costs become much more destructive simply because they are hidden or unknown.  I think business executives would likely agree that when it comes to running a business, it’s the unknowns that should cause the most worry.  As Plato put it, “Better be unborn than untaught, for ignorance is the root of misfortune.”  So, to avoid misfortune, let’s explore how we can measure the cost of poor quality.

The following excerpt is from a Quality Digest article (Lee, G., n.d.) that examines how a large, global energy company, ABB, attempts to measure the cost of poor quality.

At ABB we use a metric known as cost of poor quality (COPQ). COPQ is measured by estimating the cost of all efforts undertaken in an organization, including materials and processes used in assembling our products, that don’t provide value to customers. In the lexicon of lean manufacturing, these are nonvalue-added activities. At ABB, COPQ is the sum of all nonvalue-added costs divided by the total revenue that’s generated. The resulting measurement is the percentage of revenue that’s lost due to waste.

COPQ is measured, reported, and tracked in each of the business units. COPQ is used to measure progress within the organization and to identify best practices that can be shared throughout the company. COPQ is a metric and a learning tool, helping an organization to understand what’s nonvalue-added and to establish opportunities for improvement.

My first thought upon reading the article is how impressive ABB’s measurement capabilities are, given that they can track and report nonvalue-added costs across all their business units.  The article doesn’t say how ABB did that, but the excerpt does provide us the simple math behind their COPQ metric.  We’ll have to devise our own approach for determining nonvalue-added costs.

I think all approaches, like ABB’s, should start with total revenue.  That’s easy.  It’s the cost part that will likely be tough, requiring some creative approaches.  Using my current company, a default management law firm, I’m going to explore what I hope is a reasonably simple approach.  We have no ability to precisely track value-added and nonvalue-added costs, so I need an alternative.  Since our legal processors (the hourly Operations staff who do the work that generates the revenue) spend a lot of time on rework and quality issues, I can’t simply take their total compensation costs and treat them as value-added cost.  Plus, there are also staff from Operations management, IT, Accounting, and HR who get sucked into our quality problems.  Their labor needs to be reflected in our nonvalue-added costs.

What I will propose is that my company establish a benchmark cost per legal case type. For example, through a bit of analysis, we may determine that a typical bankruptcy costs us $1,000 to execute, an eviction may cost $1,500, and a foreclosure may cost $500.  Since we know the volume of cases for each type, we then know what our value-added costs should be.

If we take the total costs of the Operations department, plus the total costs of the departments that directly support Operations (IT, HR, Accounting) in the execution of our customer value stream(s), we can then come up with our own COPQ formula that resembles the ABB approach.

W = Total costs of Operations plus supporting departments (IT, HR, Accounting)
X = Total value-added cost of all processed cases (based on benchmarking)
Y = Total revenue

Z (nonvalue-added costs) = W (total costs) - X (value-added costs)


With our nonvalue-added costs established, we can then rejoin the ABB approach and compute our COPQ as Z divided by Y, which gives us a metric we can track over time.  We can further refine the total operating costs by, for example, taking only a portion of HR’s costs based on the number of new hires brought into Operations versus the number of new hires across the enterprise.  Ideally, we want our total operating costs to be associated with our customer value stream(s), and not include overhead for things such as Marketing and new business development.
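To make the arithmetic concrete, here is a small sketch of the W/X/Y/Z formula in Python.  Only the per-case benchmark costs ($1,000 bankruptcy, $1,500 eviction, $500 foreclosure) come from the example above; the case volumes, total costs, and revenue are hypothetical placeholders I made up purely for illustration.

```python
# COPQ sketch following the W/X/Y/Z formula above.
# Benchmark costs per case type are from the example; volumes, total
# costs, and revenue are hypothetical placeholders.

benchmark_cost = {"bankruptcy": 1000, "eviction": 1500, "foreclosure": 500}
case_volume = {"bankruptcy": 800, "eviction": 300, "foreclosure": 2500}

# X: total value-added cost of all processed cases (benchmark-based)
value_added = sum(benchmark_cost[t] * n for t, n in case_volume.items())

total_costs = 4_000_000  # W: Operations plus IT, HR, Accounting (placeholder)
revenue = 6_000_000      # Y: total revenue (placeholder)

nonvalue_added = total_costs - value_added  # Z = W - X
copq = nonvalue_added / revenue             # COPQ = Z / Y

print(f"Value-added cost (X): ${value_added:,}")
print(f"Nonvalue-added cost (Z): ${nonvalue_added:,}")
print(f"COPQ: {copq:.1%} of revenue")
```

The appeal of this approach is that once the benchmark study is done, keeping the metric current only requires the case volumes and two ledger totals.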

Measuring the hidden costs of poor quality is not going to be the easiest metric to obtain, but it isn’t unreasonable.  If a company really cares about quality and knowing how big their hidden factory is, then finding the motivation to calculate a COPQ shouldn’t be that hard.  Hopefully, the effort will make the hidden costs a little less hidden and decision-making a little more sound, and maybe, just maybe, knowledge of their hidden costs will allow businesses to avoid a little misfortune.

References

Lee, Gerald. (n.d.). The Hidden Costs of Poor Quality. Retrieved from http://www.qualitydigest.com/dec07/articles/06_article.shtml

Sunday, April 10, 2016

A Nascent Governance Process

Gartner’s slide deck entitled “Five Steps for Defining Effective Enterprise Architecture Governance” (n.d.) inspired me to take a retrospective look (using the five steps as my measuring stick) at the IT governance process implemented by my company last summer.  We’re an SMB with no formal EA practice that’s fighting to survive right now, so this is not meant to be viewed through rose-colored glasses or to be cynical in any way.  It’s just meant to be real-world experience from a company trying to find its footing when it comes to the business and IT working together more effectively.

Step 1: Incorporate IT and Corporate Governance Principles

We start off the retrospective a little weak with regard to our principles, which are implied rather than expressly written down and promoted.  Right now, it’s a “do the right thing” approach where we all assume we know what the right things are.  There’s no substitute for writing important things down, and there’s little doubt that individual perspectives vary widely, so a set of written principles would serve us well.  We’ll need to work on this area.

Step 2: Identify Your EA Archetype

This is an easy one.  Gartner provides a magic quadrant of governance archetypes where the lower-left is the least capable and the upper-right the most capable.  We’re so new to governance that we easily fall into the lower-left.  Right now our governance meetings tend to be focused on providing status updates on recent IT issues rather than on strategic matters.  Given time and commitment, I’m sure we’ll progress to better quadrants.  I’ll know we’re making progress when we spend less time talking about recent failures and more time on planning for the future.

Step 3: Identify Your Organizational Culture

I feel like we’re doing well here.  The Gartner presentation notes (from “Five Steps for Defining Effective Enterprise Architecture Governance”) talk about how organizational cultures often must change to keep pace with the transformations brought on by EA.  In our case, the current transformation isn’t brought about by EA, but is instead about embracing the principles of EA.  In other words, we’re transforming our culture into one where the business and IT work together to plan for change and sound governance is expected.  Our execution isn’t where we want it to be yet, but our cultural commitment feels real, which bodes well for the future.

Step 4: Identify Your Governance Style

Our governance style is very much akin to the business monarchy model, which emphasizes a centralized approach, with a tilt toward business representation, where the executives tend to make the decisions (Broadbent, 2002, p. 4).  Even though we’re a collection of five companies plus three law firms doing business in eight states, our governance is completely centralized.  Historically, we have had a diversified operating model, but we are gradually shifting to a unification model, so our centralized governance approach makes sense.

Step 5: Match Your Governance Style With Your EA Approach

We don’t have a formal EA program, at least by name, but we operate in a de facto manner as one.  We are a collection of business and IT leaders planning for change (suspending, if you will, your recollection of Step 2, where I admitted we largely spend our time talking about recent IT issues).  Our pseudo-EA approach is definitely traditional, with one centralized voice and direction for the enterprise, which fortunately matches well with our monarchy governance style from Step 4.

Conclusion

There’s little about our nascent governance approach that is clicking at this point, other than good intent, which strikes me as normal growing pains.  If we stay positive, diligent, and committed, I’m confident we’ll reach the upper-right of Gartner’s EA Archetype Magic Quadrant (Step 2) in due time.


References

Broadbent, Marianne. (2002). CIO Futures: Lead With Effective Governance. Retrieved from http://unpan1.un.org/intradoc/groups/public/documents/apcity/unpan011278.pdf

Gartner. (n.d.). Five Steps for Defining Effective Enterprise Architecture Governance. Retrieved from Penn State EA-872 L08 class material.


Saturday, April 2, 2016

Business Velocity

For this blog post, I want to talk about business velocity, which has been defined in one manner as a company's ability to generate operational speed while heading in the right direction (ebizQ, n.d.). I believe, just like previous posts on complexity and hidden costs, business velocity is one of those hard to see and measure business factors that EA needs to better understand.

The Business Case

This past Friday, I spent some time working on a business case with two representatives from the operations side of the business.  We are looking at automating the submission of invoices for one of our largest clients.  The business folks already had their numbers for how long it takes and how much it costs to prepare and submit an invoice for this client; they just needed some costs from IT for software development, which I provided.  The resulting reaction from the Ops folks was basically “eh, not very compelling,” followed by “darn, we’re probably not going to do this.”

The Backstory

The company I currently work for is struggling.  We’re a default management law firm, which means we handle foreclosures, bankruptcies, and evictions for lenders.  As cases proceed, the lenders require us to update their systems so they will always know the status of their loans.  This is very onerous, but it is a requirement for our business.  The default management firms that are thriving today invested heavily in integration and automation many years ago.  They have higher quality, much higher margins, more capital for investment, and the ability to grab market share from us.  We’re playing catch-up.

A Push to Automate

The invoice transaction we were assessing is but one of hundreds of transactions for which we need to update the lender systems, all of them with some kind of measurable average duration.  For the aforementioned invoice transaction, the duration per submission is three minutes.  Almost all transactions will be in the two-to-three-minute range.  Let’s say that, on the whole, we have three hundred unique transactions for updating lender systems, each taking three minutes on average, and we process, on average, 1,000 such transactions each day.  What if, by investing in IT automation and integration, we shaved two minutes off each transaction?  That would be 2,000 minutes, or 33.3 hours, or roughly 4 FTEs of savings each day.

What struck me about our business case exercise is that it was based strictly on the transaction time and nothing else.  However, since our objective as a company is to automate several hundred transactions, I started wondering whether the productivity impact starts to accelerate as we progress with our automation initiative.  I don’t believe the operating improvements remain constant.  As we automate more transactions, the productivity of the enterprise will, I believe, improve beyond the measured minutes and actually start to accelerate past the original measurements, though not in a way that is easily predicted or measured.

My point is simple - the work being automated becomes less tedious, turnover is reduced (which positively impacts HR and managers), training is easier, quality improves (which shrinks the hidden factory), talented resources can be better utilized, and so forth.  The impact permeates the entire enterprise.  I also believe that the remaining one minute of labor per transaction shrinks further below what was originally measured because of the lower turnover, better training, improved quality, and better utilization of talented resources.  Basically, business velocity is picking up, and that’s tough to capture in a business case.  Hopefully, the initial “eh” reaction doesn’t win the debate.

Conclusion

I’ve blogged about complexity, hidden costs, and now business velocity, all of which are very difficult to identify and quantify, yet extremely important to selling the projects that need to be sold.  If EA teams cannot measure these factors, we need to, at the very least, understand them so we can articulate messages that are compelling and influential rather than mere hyperbole.  For a good metaphor, I suggest reading “How to Break the Log Jam Slowing Your Small Business Velocity” (Nagel, n.d.).


References

ebizQ. (n.d.). ebizQ Roundtables. Retrieved from www.ebizq.net/series/19.html

Nagel, Jackie. (n.d.). How to Break the Log Jam Slowing Your Small Business Velocity. Retrieved from http://www.synnovatia.com/business-coaching-blog/bid/183334/How-to-Break-the-Log-Jam-Slowing-Your-Small-Business-Velocity