Saturday, December 3, 2016

Topic 7: Evaluating Emerging Technologies, Innovations & Trends


For this reflection, it was difficult to find fault with the five myths that were presented in the article. To recap, those five myths are:

  • Myth No. 1: Innovation Just Happens
  • Myth No. 2: Innovation Only Happens in R&D
  • Myth No. 3: The Best Innovation Comes From Inside
  • Myth No. 4: The More Ideas We Generate, the Better
  • Myth No. 5: We Have Lots of Smart People, so Innovating Will Be No Problem

The only exception I would take is that a sixth myth should be added: “Innovation Needs to Be Big and Game-Changing”.  In reality, innovation comes in all sizes, from small to large impacts, and from entirely new lines of business or products to incremental improvements to existing products and processes.

It’s important for organizations to seek out the small ideas that lead to incremental improvements and make sure those ideas see the light of day.  A Lean Six Sigma team traversing an enterprise inspecting and leaning out processes is, arguably, innovating.  Many of the ideas identified and brought forth through such low-level initiatives require resources from IT to realize the gains.  This is where it becomes important for an enterprise to make sure it is enabling not only “big idea” innovation, but also the small ideas.

Sometimes, the easiest way for a business to significantly increase revenue is to wring a few extra cents or dollars out of a high-volume, well-established business model.  Conversely, businesses sometimes create unintended drag and inertia by pursuing new business models and products that distract resources and create undue complexity.  The point is that thinking big is good, but not always better.


With somewhat serendipitous timing, the CEO of the company I currently work for announced an innovation initiative in her bi-annual address to the enterprise.  While I have no insight into what this initiative will actually look like, this “Six Styles…” article seems worth reflecting on as I speculate about which style a $60 million per year, 500-employee real estate law firm might adopt.

For the first decision point on page three, “drive or enable”, the firm is too small for its innovation initiative to enable other parts of the enterprise to innovate - in reality, this initiative will have to be IT-driven so there is little choice but to go down the “drive” branch.

For the next decision point, “lead or respond”, the CIO and the company founder are both creative thinkers with very strong personalities, a combination that does not exist elsewhere in the enterprise.  For this decision point, I see the CIO and founder leading rather than responding to others in the business.

For the last decision point, “adapt or invent”, the safe bet is on adapting as opposed to inventing.  Real estate law firms (more specifically default management law firms) are forced largely to follow the lead of their large clients who happen to be major banks and lenders.  If there’s any inventing going on, it will likely be driven by the large lenders and it will be up to the law firms to adapt.  So, after traversing down the tree, it looks like our style will likely be that of a Navigator.

One final note.  There are areas of the law where we may be bolder and take on more of the Scholar style.  The law is notoriously confusing and often out of reach for the common person.  The area of "invention" that gets talked about a lot is the creation of business models that make the law more accessible and affordable (via the Internet) without the need for an expensive, personalized attorney relationship. These legal models already exist, but they only scratch the surface of the world of law, so maybe we’ll do a little inventing along with a lot of navigating.


This reflection will explore how Atlassian, a software product company, built an innovation culture and how well some of their practices would fit with a small to medium-sized, non-technology-based company.

Under the broad category of “Keeping High-Performers”, a best-practice of allowing your developers to de-stress was described.  It says that giving developers time away from the grind will help them meet their deadlines without the feeling of a death-march.  There really isn’t a unique angle for SMBs on this particular practice.  I am, however, skeptical that pulling people away from their work while still requiring them to meet strict and stressful deadlines will do anything other than breed cynicism.  Beyond that obvious conflict, offering employees a diversion from the usual grind sounds like a good idea for all businesses.

Practicing radical transparency also sounds like a practice that could, and should, extend to any type of business.  In fact, there is so much talk about transparency these days, it would seem to qualify as one of the latest business trends.  Since many SMBs are founder/family owned, true transparency may be more constrained than in public businesses, but regardless, transparency is a good thing across all businesses.

The 20% Time and FedEx Days sound like a good fit for a software product company that is looking for innovative features to incorporate into its product lines, as exemplified by the Confluence Widget Connector.   However, an SMB (or any sized business) that is not IT-centric will likely find it impractical to apply this practice.  What would be more practical is giving the software development staff some routine amount of time to set their own priorities and address the technical debt that builds up over time.  The development staff at SMBs is rarely afforded the time to address this accumulated IT complexity, and there are usually compelling efficiencies to be gained when the technical debt is addressed.  It’s also intrinsically rewarding for developers to be given that time, as it ultimately makes them and their teammates more productive and able to deliver higher quality.

The practice around organic process optimization is something that maps well not only to SMBs but to organizations in general.  I really didn’t see this as very novel.  I can’t imagine any software development teams that are not allowed to organically define and adjust their internal processes.

The primary best-practice missing from this analysis is the building of a strong partnership between IT and the business, which in this case, is because the business is a software development company.  For any other organization, that will have to be the most important key to innovation.

Saturday, November 5, 2016

Topic 6: Understanding the Emerging Business Architecture Layer


For this reflection, the authors provide not only five core principles of a successful business architecture, but they augment them with five additional items of importance.  There are two key points from the article’s bullets (drawing from both the set of five core principles and the additional list of points called the “Other Big Five”) which will help both Enterprise and Business Architects get to the core of what a business architecture is.

Before getting into the two key points, I want to praise the authors for presenting their points in a simple, easy-to-remember manner.  The challenge I have with disciplines such as Enterprise or Business Architecture is with many of the terminology definitions - it seems simplicity is often forsaken for protracted and convoluted definitions.  For instance, the “Business Architecture” definition on Wikipedia states:

Business architecture is defined as "a blueprint of the enterprise that provides a common understanding of the organization and is used to align strategic objectives and tactical demands.”

The above definition would be more helpful if it simply stopped at “a blueprint for the enterprise”.  Everything else seems excessive and even confusing.  How many people really know what “aligning strategic objectives and tactical demands” really means?  It’s this kind of verbose excess that made me appreciate the above article.  The core principles and additional items are presented simply and succinctly and are easy to remember.  Supporting detail is, of course, provided within the document.

The first key point is the absolute exclusion of technology from the business architecture.  The first of the five core principle bullets states that business architecture is about the business.  It is not about IT and technology.  Technology should not be reflected at all in a business architecture.  Very closely related to this core principle is the third bullet from the set of the “other big 5”.  That bullet states that a business architecture does not directly feed software development. If a business architecture contains processes on how a software application is used or even contains detailed, low-level processes, it’s likely to fail.  A good business architecture should simply help people understand the business and, of course, help plan for change.  

The second key point comes from the second of the five core principles.  This bullet states that business architecture is not prescriptive.  In other words, there is no single recipe for enterprises to follow.  Seeking input and knowledge from other organizations is important, but we must remember that what we learn is not meant to be applied verbatim.     

If we remember that a business architecture is a blueprint for a business, if we keep technology out of it and avoid low-level processes, if we remember that every organization is unique and thus should expect to develop a business architecture that looks like none other before it, then we will have a solid and simple understanding of business architecture.


In my Enterprise Architecture Foundations I class at Penn State, I recall learning how organizations cannot exist in a vacuum and instead must exist in an environment.  Furthermore, the teachings went on to state that an organization is defined by how it interacts with its environment, not by what goes on within the organization.  This class material was presented as long-held and accepted beliefs about what an organization is, not as some new concept.

What is new though is how interactions with the environment are changing and the rapid pace of that change.  It is this pace of change which requires a larger role to be played by Business Architects.  The biggest changes are with 1) customers, 2) business partners, and 3) the workforce.  

With regards to customers, the importance of knowing your customer and listening to them hasn’t changed; rather, it’s the ability to gain immediate, and often unsolicited, feedback from them that is relatively new.  In the past, organizations may have established small focus groups, conducted surveys by mail, or relied upon product registration cards to know their customers.  The customer has always mattered; it’s just that now the feedback loop needs to be continuous and instantaneous.
With regards to business partners, technology integration advancements have made it easier for organizations to establish partnerships that open the door to efficiencies and new business models.  However, more partnerships will also lead to increased conflict and business model disruptions as organizations will generally be looking out for their own self-interests.  The less an organization controls, the more disruption the organization will have to plan and be prepared for.  

With regards to the workforce, the trend toward freelancing and bidding out small pieces of work will demand considerable legal preparedness and raise challenges around confidentiality and ownership (of the outputs and intellectual property).  The anytime/anywhere workforce also comes with many challenges in measuring value and productivity.  In both scenarios (freelancing/bidding and anytime/anywhere) it’s important to know what your expectations for value look like. If an organization cannot grasp that, then these new trends for engaging the labor force will likely not work out well.


Business Architecture, at least to me, seems like an initiative which businesses engage in because they feel like they’re supposed to even though they don’t fully understand what it is.  Because of this perception, I’ve chosen to reflect on the above article, as it attempts to clarify what business architecture is and what some of the misconceptions are.

The first point that caught my eye was the section entitled “Business Architecture: Where Does It Come From?”. The authors state that the business architecture should come from the business, which seems obvious on the surface.  However, the authors point out that many times the business architecture comes from third-party consultants and from IT.  Leveraging third-party consultants would indicate that the business does not want to dedicate their own resources to the initiative, which is a sign they don’t take it seriously and are doing it because they feel like they’re supposed to.  

In the case of IT, I see a different dynamic at work.  More often than not, I would expect that it is IT that brings the idea of a business architecture to the enterprise leadership.  Also, more often than not, I would expect that it is IT that incubates the concept and gets it moving forward, as IT, while not possessing the same depth of knowledge of individual departments, often has the most breadth of knowledge in the business.  

If a business can launch a Business Architecture program that is separate from IT, then that is certainly a good thing.  However, if a Business Architecture program is launched by IT, or IT is leveraged by the business, that isn’t necessarily a bad thing as long as the goal is to spin off this new competency from IT so that it can truly operate as a business-focused organization separate from IT.

The other interesting point comes from bullets #5 and #9 on the list of misconceptions.  Bullet #5 debunks the notion that a business architecture is project or business unit specific.  The takeaway is that the more narrowly defined the business architecture is, the less value it has.  Couple that with bullet #9, which debunks the notion that a business architecture isn’t needed because businesses already know who they are and what they do.  The key word used in the debunking was “fragmented”.  Business knowledge is very fragmented; there are very few employees with cross-business-unit knowledge and fewer still with a full enterprise perspective.  If a narrow business architecture is of dubious value, then, to provide a business with knowledge it doesn’t already possess, the business architecture must truly be enterprise-wide.


There are several other valuable points in this article that put Business Architecture into better focus, making it worth a full read.

Saturday, October 22, 2016

Topic 5: Enterprise Security Architecture


As a law firm, my employer maintains a large amount of personally identifiable information (PII) along with vast amounts of our clients’ (banks and lenders) data and digital documents.  Our desire to move as much of our datacenter to the cloud (along with all this sensitive data) as quickly as possible has led to many debates about PII and sensitive client data in the cloud.  These debates have to consider the regulations governing the mortgage industry (our area of practice) and law firms, along with our clients’ wishes and demands.

With this reality in mind, I wanted to dig deeper into the world of encryption for data stored in the cloud, which brought me to the above article for my first reflection.  This article offers several fascinating statistics and insights regarding the lack of security around cloud storage.  It focuses on cloud-based applications, how few application service providers encrypt their users’ data at rest, and how much sensitive data users actually upload.

The problem that I’m more interested in is with cloud-based file system storage.  When offerings like Google Drive or Dropbox are examined, we see that they encrypt user data at rest, but the rub is that the user does not control or possess the encryption key(s) - Google and Dropbox handle that.  Would this be good enough when it comes to ensuring our clients (and regulatory bodies) that their data is safe?  I don’t think so.

The only way we, or any other organization, as caretaker of sensitive information, can promise anyone that the data is safe is if it’s encrypted at rest and we control the encryption keys, not a third-party.  Even if the likes of Google and Dropbox have better security controls than my small-to-medium sized law firm, they are still a third-party - no assumptions can be made and no assurances can be granted for their capabilities.

After a quick look at Amazon’s AWS offering for cloud storage, I saw that they offered encryption at rest (no surprise) along with the option for the customer to control the encryption key(s).  I’m not sure what the banks think, but if their data is encrypted and we possess the keys, then it simply shouldn’t matter where the data is actually stored.
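
Just to make that concrete for myself, here is a minimal sketch (assuming Python and boto3) of S3’s server-side encryption with customer-provided keys (SSE-C), where the key is generated and retained by us rather than by Amazon.  The bucket, object key, and document are all hypothetical.

# A minimal sketch of S3 SSE-C: the encryption key is generated and kept by
# us, not the cloud provider. Bucket, object key, and contents are hypothetical.
import os
import boto3

s3 = boto3.client("s3")

# In practice this key would come from our own key-management process,
# would never be hard-coded, and would never be handed to a third party.
customer_key = os.urandom(32)  # a 256-bit AES key we generate and keep

s3.put_object(
    Bucket="example-firm-documents",       # hypothetical bucket
    Key="loans/12345/affidavit.pdf",       # hypothetical object key
    Body=b"...document bytes...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading the object back requires presenting the same key; without it, the
# data is unreadable even to someone with full access to the bucket.
obj = s3.get_object(
    Bucket="example-firm-documents",
    Key="loans/12345/affidavit.pdf",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)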



I couldn’t help reflecting on this article, because, as technology-oriented articles go, this one is quite sensational.  Actually, the first three flaws are more mildly surprising than sensational, at least for someone who is not a security expert.

The first flaw essentially says that PKI is too complicated and thus prone to mistakes.  This is a perspective I was not aware of, as many companies set up their own PKIs and apparently, most do it poorly.  Of course, the article is written by a PKI consultant, so he obviously stands to gain by convincing businesses that they cannot set up a PKI themselves.  

The second flaw is an extension of the first, where the author states that the complexity of PKI causes numerous PKI errors that often reveal themselves to a user trying to reach a supposedly secured website.  In a more secure Internet, the errors would be few and when there are PKI errors, access would be blocked.  However, what happens in the real world is that our browsers allow access, presenting nothing more than a warning, which is usually ignored by the user.  

The third flaw really isn’t a flaw of PKI at all.  To impugn PKI by stating that it does not address all the other technology security risks a business faces doesn’t make much sense.  PKI is primarily used as an integral component for encrypting data in transit, and with the exception of flaws one and two, it does that quite well (and has been doing it for quite some time).

It’s the fourth flaw that I found extremely interesting, very sensational, and somewhat scary.  For the fourth flaw, the author states that “eventually, PKI will stop working forever”.  One of the few things I know about PKI is that it’s based on asymmetric key pairs, where a file encrypted with key “a” can only be decrypted with key “b” and vice versa.  Knowing the key used for encryption does not allow someone to decrypt the file, which is the fascinating point about this technology (developed back in the 1970s).  The secret to asymmetric key pairs involves large prime numbers and some very complicated math, so complicated that even today’s computing power cannot be leveraged to crack the encryption keys.  However, here is where the author makes it interesting.  He says:

One day, the incredibly hard math involving large prime numbers won't be so difficult to solve anymore.  For example, one of the biggest promises of Quantum computing, whenever it finally gets perfected, is that it will be able to immediately break open PKI-protected secrets. Sometime in the near- to midterm future, useful Quantum computers will become a reality. When they do, most public crypto will fall.

After delivering the depressing message that the very foundation of a secured Internet will one day crumble, the author at least offers a modicum of hope by saying that quantum cryptography will be the answer.  The author then steals back that modicum of hope by stating that the quantum computers needed for quantum cryptography will be extremely expensive and beyond the means for the average Internet user.   

Hard to tell where this is headed, but if there’s any likelihood to the author’s claims, then I would guess that in the future we might be shopping Amazon from some centralized Internet cafe offering quantum computers, at least until we can afford our own.
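
To make the “large prime numbers and complicated math” point a little more concrete, here is a toy sketch of RSA-style asymmetric keys using absurdly small primes.  Real keys use primes hundreds of digits long, and the difficulty of factoring their product is exactly the foundation the author expects quantum computing to erode.

# A toy illustration of asymmetric keys - nothing like real key sizes.
p, q = 61, 53                  # two small primes; real keys use enormous ones
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent, the modular inverse of e

message = 42
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
assert recovered == message

# Anyone may see (e, n), but computing d requires phi, which requires
# factoring n back into p and q. Trivial here; infeasible for classical
# computers at real key sizes - and that infeasibility is the foundation
# the author says will one day crumble.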


Reflection 3: Quantum computing 101

I wasn’t expecting to reflect on quantum computing, but the previous reflection proved too fascinating to just drop the topic.  There’s a lot of interesting material in this article, presented in an approachable “101” manner suitable for someone reading about quantum computing for the first time, but what’s really valuable is that it explains how quantum cryptography works, at least at a high level.

Rather than trying to summarize an already nice summary, I’ll call out a few of the interesting elements of quantum cryptography.  The first interesting element is that the encryption key is based on the polarization of photons, which is completely random and unpredictable (crazy quantum stuff).  What’s really interesting about this is that the “key” is physical in nature, unlike today’s keys which are based on a mathematical problem.  Because of this physical (and random) nature, even the most powerful quantum computers of the future will not be able to break the key because it’s not mathematical.

Even if the keys cannot be cracked, one would think they could be intercepted in transit, which is another really interesting part of quantum cryptography.  While the photons are being transmitted to the parties wishing to initiate encrypted communication, they cannot be sniffed or copied in any way; such a seemingly innocuous action causes their polarizations to change randomly (more crazy quantum stuff), causing the keys not to match and forcing the two parties to retry until their keys agree.

The last interesting point about all this crazy quantum stuff is that, for the field of cryptography, it is simply a new means of distributing symmetric keys (shared secrets).  Once the quantum distribution of photons has successfully concluded, everything reverts back to technology we are all comfortable with today: traditional data encryption using a shared-secret key.  As long as those photon-generated keys are not stored and reused, all communication should be secure.
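
Out of curiosity, here is a rough sketch of the bookkeeping behind that kind of photon-based key exchange (along the lines of the BB84 protocol, with the physics merely simulated): both parties keep only the bits where their randomly chosen bases happened to match, and those surviving bits become the shared secret.

# A minimal sketch of BB84-style quantum key distribution. The physics
# (photon polarization, disturbance on measurement) is only simulated; the
# point is how a shared key falls out of basis matching.
import secrets

N = 32  # number of photons sent in this toy run

# Sender picks a random bit and a random basis (0 = rectilinear, 1 = diagonal)
# for each photon.
sent_bits  = [secrets.randbelow(2) for _ in range(N)]
sent_bases = [secrets.randbelow(2) for _ in range(N)]

# Receiver measures each photon in a randomly chosen basis. When the bases
# match, the measurement reproduces the sent bit; when they differ, the
# result is random (this is the "crazy quantum stuff" being simulated).
recv_bases = [secrets.randbelow(2) for _ in range(N)]
recv_bits  = [
    bit if sb == rb else secrets.randbelow(2)
    for bit, sb, rb in zip(sent_bits, sent_bases, recv_bases)
]

# Both sides publicly compare bases (not bits) and keep only the positions
# where the bases matched; those surviving bits become the shared secret key,
# which then seeds ordinary symmetric encryption as described above.
sifted_sender   = [b for b, sb, rb in zip(sent_bits, sent_bases, recv_bases) if sb == rb]
sifted_receiver = [b for b, sb, rb in zip(recv_bits, sent_bases, recv_bases) if sb == rb]
assert sifted_sender == sifted_receiver   # both ends hold the same key bits
print(f"{len(sifted_sender)} shared key bits from {N} photons")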

What I don’t understand at this point is who will generate these photon-based keys and how will they be transmitted to the parties involved.  Will it be technology for the masses or for the few?  Will the average Internet surfer be required to own a quantum computer to receive a quantum key?  The author from my second reflection implied as much when he expressed concern that quantum computers are not expected to be priced for the average consumer anytime soon, if ever.  

This leaves me wondering what the future of e-commerce and B2B integration looks like.  It sounds like there’s going to be a period of time in which a small number of quantum computers will exist for big business, governments, and James Bond villains, while unavailable to the rest of the world.  This means that traditional Internet encryption using PKI could be easily cracked by anyone possessing a quantum computer while the masses will not yet have access to quantum keys as the antidote. That sounds scary and certainly would be a huge step backward.

Saturday, October 8, 2016

Topic 4: Enterprise Technology Infrastructure Architecture

For this week’s blog posting I’m going to reflect on operating a data center as well as some of the current trends impacting data centers. A couple of blog postings ago I wrote about how SMBs are anchored down by maintaining their own data centers, stunting progress with the core business and other areas of IT such as SOA implementations and analytics, so it seems worth exploring data centers a bit deeper.


For my first reflection, I wanted to find a real-world example of a business moving entirely to the cloud, ideally an SMB.  In my research, I came across this article from last year on how Netflix finished moving its entire business to Amazon’s cloud platform.  Netflix is far from an SMB, but the article was too compelling for me to ignore.  

The second paragraph immediately answers the important question of “why?”.  In 2008 Netflix experienced a serious outage in their data center, and an outage is probably the primary impetus behind most cloud migrations, especially for SMBs, where, I believe, proper investment in reliability is lagging.  Many businesses will examine the costs and benefits and cautiously contemplate a move to the cloud, but as soon as there is an outage, that cautious contemplation quickly becomes an irrepressible demand, which speeds matters up.

The article, unfortunately, doesn’t provide any cost comparisons between the Netflix operated data centers and the cost of operating in the Amazon cloud, but direct, measurable costs are probably not going to be the deciding factor.  It’s going to be the indirect costs associated with outages that will be the deciding factor, along with the realization that running a data center is simply not a core capability of most businesses.  Focusing on the core and offloading or outsourcing most of the rest is a common and sound strategy these days.

The only piece of their home-grown infrastructure that Netflix did not move to Amazon’s cloud was their content delivery network, which essentially consists of all the video caches around the world, allowing their content to be close to the customer.  There are plenty of edge computing companies, including Amazon, that could handle this for Netflix, so one can only assume that Netflix considers their content delivery network a core capability and competitive advantage - something they do better than others - while the rest of the IT infrastructure was not.
   
It’s ironic to note that as I was writing Reflection 3 (below), Netflix had a serious worldwide outage for at least one hour.  It would be interesting to know if it was Amazon’s doing or something self-inflicted by Netflix.  Regardless, I don’t think Netflix will be reversing direction any time soon.

In looking for articles on data centers and moving to the cloud, this article, while not providing insights on the value of moving to the cloud, was still too interesting to ignore.  I’ve always marveled at Google’s ability to do what they do, especially given the performance problems of the small, custom-built application that manages the back-office operations of my current employer.

What amazed me was the revelation that Google now designs their own hardware to address their unique performance and load requirements rather than use commercially available options, which Google deemed unsuitable for their needs.  

At first, I was curious why Google hasn’t commercialized their networking/hardware innovations, but I quickly concluded that they must consider such innovations a competitive advantage and, as such, choose to keep their technology private.

More importantly, I wondered how networking giants like Cisco could be out-innovated in their core area of expertise by a company whose core is not networking and hardware. Is it because commercializing hardware and software for a mass market requires compromises in order to achieve certain cost and utility goals?  I’m going to assume Cisco knows how to do what Google did and probably figured it out before Google did.  Giving them the benefit of the doubt, Cisco must have concluded that the capabilities were too specialized and/or too expensive for the commercial market.

Conversely, it could be that Cisco is withholding innovations that would disrupt their own market segment.  That’s not unheard of, albeit very risky, as I have to believe there are numerous other potential disruptors, that unlike Google, would be happy to commercialize such innovations.


One of Google’s innovations (from the previous reflection) was to build software-based switches using cheap hardware.  This sounded very similar to the recent trend in data centers called hyperconvergence, which is the third article I want to reflect on.

The hyperconvergence trend simply takes the software-based virtualization paradigm for compute environments and adds storage to the mix.  This allows data centers to simplify and further extend the cost savings and flexibility that virtualized compute environments have long delivered.

The article compares and contrasts the hyperconvergence trend with its convergence predecessor.  Both trends try to standardize on common hardware; however, the convergence trend still saddles a data center with separate storage and compute devices, which in turn requires an emphasis on expensive networking hardware.  By contrast, hyperconvergence melds storage and compute platforms together, managing them from the same software-driven virtualized environment and, in turn, deemphasizing the network and the need for expensive networking hardware.

It is interesting how strong a resemblance the hyperconvergence trend bears to Google’s data center innovations from Reflection 2.  In Reflection 2, I pondered why Cisco (or others like them) hadn’t beaten Google to the punch with such software-driven data center innovations.  One of my speculations was that it might be too disruptive to their lucrative market.  I find it interesting that hyperconvergence, with its emphasis on virtualized storage and compute environments and its de-emphasis on networking, seems to support that speculation.  There should be no need for the networking giants to worry, though, as I’m sure the Internet holds plenty of opportunities for growth.

Saturday, September 24, 2016

Topic 3: Enterprise Data Architecture


The first article I want to reflect on lists the six principles of a modern data architecture.  I almost dismissed this article as a list of banal bullet points which we have seen many times over. However, it was the last bullet point (#6 Eliminate Data Copies and Movement) that grabbed me.  

I’ve always focused on establishing single points-of-entry (into the enterprise) for critical data entities.  Once these single points-of-entry are established, a robust SOA can allow the rest of the enterprise to request the data, rather than creating redundant data capture capabilities which always lead to data inconsistencies throughout the enterprise.  

My thinking also assumes that systems requesting data from the point-of-entry system(s) will store the data locally for performance purposes.  Even though the data will be replicated, at least it will be single-sourced and consistent.  It is that very assumption of mine that made principle #6 so compelling.  It’s a great vision, eliminating data redundancies and movements, but has technology progressed enough to make it a data architecture principle that all enterprises can adopt?  
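
To illustrate the tension for myself, here is a minimal sketch (with a hypothetical service URL, assuming Python and the requests library) of the two patterns: asking the single point-of-entry for the data every time versus keeping a local replica for performance.

# A minimal sketch of the two patterns: request data from its single
# point-of-entry over the SOA, or keep a locally replicated copy for speed.
# The service URL and names are hypothetical.
import requests

CUSTOMER_SERVICE = "https://soa.example.internal/customers"  # hypothetical endpoint

def get_customer_on_demand(customer_id):
    """Ask the system of record every time - always consistent, but every
    read pays a network hop."""
    response = requests.get(f"{CUSTOMER_SERVICE}/{customer_id}", timeout=5)
    response.raise_for_status()
    return response.json()

_local_copy = {}  # stands in for a consuming system's own tables

def get_customer_from_replica(customer_id):
    """Serve from a local replica for performance - fast, but now a second
    copy exists, which is exactly what principle #6 wants to eliminate."""
    if customer_id not in _local_copy:
        _local_copy[customer_id] = get_customer_on_demand(customer_id)
    return _local_copy[customer_id]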

The same observation was noted in Andrew Richard’s blog posting from September 11, 2016, where he notes the usual and customary approach of replicating data for performance purposes, even when a robust SOA is available to share data.  Mr. Richard goes on to ask if there are any good reference architectures that address the replication problem.  The short write-up for principle #6 mentions a product called Hadoop, which I had never heard of before and which is where my second reflection is headed.


After Googling “Hadoop” and reading various articles, I quickly learned that it is an open-source, big data platform.  I will admit that I am not well versed in the big data trend.  My high-level understanding is that it is largely pertinent to data warehousing and analytics.  However, for the question raised in Reflection 1, what I’m really interested in are traditional relational databases used by OLTP applications.  Can Hadoop (along with a compatible RDBMS) and big data reach beyond the data warehousing and analytics world and solve the replication problem in the OLTP world?  Can a business actually have a single “big data” OLTP database serving much, or maybe even all, of the enterprise?

Those questions brought me to the second article I want to reflect on.  This article asks and answers four different questions about Hadoop.  The first question sets the stage quite well...
Hadoop is primarily known for running batch based, analytic workloads. Is it ready to support real-time, operational and transactional applications?
This is exactly what I want to understand, so let’s look at the second question which asks how enterprises can take advantage of Hadoop.
How can enterprises, specifically in the Retail industry, take advantage of a Hadoop RDBMS?
The author responds to this question with a scalability and performance answer, stating that enterprises can leverage Hadoop and Splice Machine (a Hadoop-compatible RDBMS) to provide scalability and performance for extremely large RDBMSs.  So scalability and performance are examined, while eliminating data replication and movement is not.  My hope that this article was going to address principle #6 from my Reflection 1 article seems very much in doubt now (even though principle #6 specifically mentions Hadoop).  Let’s take a look at the third question.
Can we run mixed workloads – transactional (OLTP) and analytical (OLAP) – on the same Hadoop cluster?
Now I see how this article is addressing principle #6 and why Hadoop was mentioned in the first place.  Remember principle #6 is titled “Eliminate Data Copies and Movement”.  I immediately interpreted that as a call to eliminate redundancies between OLTP databases and the movement of redundant data between OLTP databases.  However, this third question reminds me of the more likely intent, which is the elimination of data redundancies and movement between OLTP and OLAP environments.  I don’t know if that’s what the first article had in mind with principle #6, but I think it is.
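
To picture what a mixed workload on one cluster might look like, here is a rough sketch using PySpark as an illustrative stand-in (this is not Splice Machine’s actual interface, and the table path is hypothetical).  The analytical half is natural on Hadoop; fast, transactional point lookups and updates are the part a Hadoop RDBMS layer is meant to add.

# A rough sketch of "mixed workloads on one cluster" using PySpark as an
# illustrative stand-in. The table path is hypothetical, and this is not
# Splice Machine's actual API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mixed-workload-sketch").getOrCreate()

orders = spark.read.parquet("hdfs:///warehouse/orders")  # hypothetical path
orders.createOrReplaceTempView("orders")

# OLAP-style: scan and aggregate across the whole table.
spark.sql(
    "SELECT order_date, SUM(total) AS revenue FROM orders GROUP BY order_date"
).show()

# OLTP-style: fetch one customer's open orders. Expressible here, but making
# this fast, concurrent, and transactional is what a Hadoop RDBMS layer such
# as Splice Machine is supposed to provide.
spark.sql(
    "SELECT * FROM orders WHERE customer_id = '12345' AND status = 'OPEN'"
).show()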

The fourth question simply explores the return on investment for implementing a Hadoop/Splice Machine solution for massive RDBMSs that mix OLTP and OLAP capabilities.

Even though article 1 was not specific about what it meant by principle #6, and even though article 2 did not actually explore the possibility of some enterprise one day building a single enterprise-wide OLTP database, I can’t help but think the enabling technology exists with Hadoop and Splice Machine - the biggest challenge with such a crazy concept will likely be organizational.


The third article I want to reflect on shifts gears to a different topic, which is treating data as an asset.  Among the Lesson 3: Enterprise Data Architecture reading material for EA-874 is an article from Gartner titled “Managing Information as an Asset: Enterprise Architects, Beware!” which describes this “information as an asset” concept as treating your data with care, ensuring consistency, accuracy, accessibility, utility, safety, and transparency. Upon reading the title, I felt I wanted to include the article among my three reflections.  Upon reading the article itself, I no longer felt it was worth reflecting on - businesses needing to treat their data with care like it’s an asset seemed too obvious and not very interesting.

However, just a few days later I came across the Reflection 3 article from above.  This article actually does talk about assigning a dollar value to a company’s data, which is a new and interesting concept, and one I want to reflect on here.  

My initial position is that data valuations don’t belong on the balance sheet as its value is already reflected in intangible asset categories such as goodwill, brand equity, and intellectual capital.  However, what I learned in reading the article is that this idea of valuing data is not about adding new assets to a company’s balance sheet, but is instead about taking the intangible assets valuation and breaking it down further to assign a chunk of that valuation to the company’s data.

The important realization for me is that the total value of intangible assets will still be calculated in their traditional ways.  Here is a very simple approach:

Intangible assets = market capitalization - (year-end sales + tangible assets)    
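
Plugging in some entirely hypothetical numbers (in millions of dollars) makes the arithmetic plain:

# Hypothetical figures, in millions of dollars, for illustration only.
market_capitalization = 900
year_end_sales = 250
tangible_assets = 300

intangible_assets = market_capitalization - (year_end_sales + tangible_assets)
print(intangible_assets)  # 350 - the pool from which a data valuation would be carved out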

The article isn’t saying businesses are now expected to mess with their balance sheets and inflate them with valuations of the data they possess.  It’s simply saying that businesses need to take their intangible assets valuation and figure out how much of that valuation is attributable to their data.

So why is this important?  The answer is risk management.  The higher the data valuation, the greater the exposure to such things as cyber attacks and system outages (such as Delta Airlines’ recent outage).  Once investors, business leaders, and IT leaders have a clearer picture of risk exposure, then, it stands to reason, better decisions will be made to mitigate the exposure associated with safeguarding a company’s data.

Saturday, September 10, 2016

Topic 2: Enterprise Application Architecture

Since the topic for this blog posting is Enterprise Application Architecture and since the title of this blog is Enterprise Architecture for SMBs, it seems fitting to analyze one of the critical elements of an Enterprise Application Architecture, that being SOA, for small and medium sized businesses.


The first article which I want to reflect on explores the position that SOA isn’t necessarily important for SMBs.  The reason offered is that SMBs do not have the application complexity and diversity that large enterprises have.  I currently work at a $60 million business which has two primary lines of business.  Each line of business has a primary back-office system (one is custom-built, the other is commercial) along with shared commercial finance and CRM systems.  That’s about it - definitely not complex, so the article has my attention.

The article references an informal survey of mid-market CIOs which reveals that two-thirds feel there is no business need for SOA at their enterprise.  This opinion may be due to an overall simpler IT environment that “works” or it may be due to limited resources (people and budget).  Whatever the reasoning is among these CIOs, I think the informal survey provides an accurate representation.  The article is from six years ago, but having worked at SMBs for the past nine years, I think the opinions expressed in this 2010 article are still holding true in 2016.  

Further into the article, however, the author chooses to pivot from the opinion of the mid-market CIOs and offers some of his own reasoning on why SMBs should care about SOA.  He says:

Of course, the mid-market is the prime candidate for cloud computing, and this is where the value of service orientation will ultimately be realized, on an enterprise-to-enterprise scale. But for applications within the infrastructures of smaller enterprises, SOA is still beyond reach.  

Still, the more a business of any size can service orient, the better. SOA is an extension of enterprise architecture, and EA is important at any level.

I’m in agreement with the author and I think the first paragraph above is very telling.  Even six years after this article was published, I can attest that for many SMBs, running a solid, well-managed data center is beyond their capabilities.  With the data center yoke around our necks, the notion of implementing a service-oriented architecture is accurately stated as beyond our reach.  We need to get out of the data center business, move to the cloud, and then start innovating with our Enterprise Application Architecture.


For my second reflection, I want to remain on topic with SOA at SMBs and get some additional perspectives on why SMBs can, and should, move to a SOA.  The article being examined is actually a series of responses to the question “How Can Small to Medium Sized Enterprises Benefit from SOA?”.  Most of the answers lack substance and depth, but there were a couple of comments that I found very relatable.  In the posting from John Power he states the following:

However, from my perspective the real benefit to SMBs is the possibilities it offers them to integrate their systems with large customers or suppliers.

This struck me in two ways.  First, it struck me for its accuracy.  It has been my experience that invoking a few web services and standing up a few of our own in order to integrate with our suppliers and partners is exactly what many SMBs do for their SOA initiatives (without going much further).  

The second manner in which it struck me was how it reinforces the confusion and hype around SOA.  Integrating with a handful of external businesses through web services in no way constitutes the deployment of a broad service-oriented architecture.  However, I will state that those simple external integrations are extremely valuable to SMBs and they work well, leading many CIOs, I reckon, to conclude “that’s good enough for us”.
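
For a sense of how modest such an integration can be, here is a minimal sketch (assuming Python and Flask, with an entirely hypothetical endpoint and data) of the “standing up a few of our own” side of the equation:

# A minimal sketch of a partner-facing web service, using Flask. The
# endpoint, data, and consumer are hypothetical - just enough to show how
# little it takes to integrate with a large client without a full SOA program.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a lookup against the firm's back-office system.
_case_status = {"2016-00123": {"status": "Judgment Entered", "next_hearing": None}}

@app.route("/api/cases/<case_id>/status")
def case_status(case_id):
    """Lets a large client (a bank or servicer) poll case status directly
    instead of waiting for a nightly report."""
    record = _case_status.get(case_id)
    if record is None:
        return jsonify(error="unknown case"), 404
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)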

The other posting that caught my attention was from Kelly Emo.  She states the following:

Secondly, SMBs will find themselves brought into the SOA world whether they explicitly plan for it or not. Packaged apps and SaaS solutions are rapidly evolving to be architected based on the concept of loosely-coupled shared services...

This claim is also very accurate in how it matches my SMB experience.  It is the packaged applications and SaaS solutions which SMBs utilize that essentially force them to eventually learn and leverage the service orientation built into those commercial offerings.

It seems the ultimate takeaway for me is that SMBs will adopt SOA gradually (even glacially) and very deliberately, avoiding big-bang transformations.  If resistance persists, then, to the point made by Kelly Emo, SOA will likely start deploying itself in the enterprise.


The third article I want to reflect on comes from the Lesson 2 reading material for EA-874.  This article seemed like it would be a fitting cap to my SOA-themed blog post.  I expected the article to largely espouse the virtues of service-orientation for building applications. In other words, the author just wants us to replace the traditional locked-in business logic of the middle tier with services that can be invoked throughout the enterprise.

The author quickly reminded me, however, that replacing the locked-in middle-tier with services still leaves us with a three-tiered application.  So, my assumption about the article was wrong and I was suddenly wondering what new trend all the SMBs will be lagging behind on now.

It turns out the author is not actually recommending that we retire the three-tiered separation of concerns between user interfaces, business logic, and data access.  He, in fact, states that we need to continue reinforcing the three-tiered separation of concerns.  All the author is saying, as a natural evolution after implementing a SOA, is that the user interfaces of applications should no longer be homogeneous and monolithic - they should be a collection of components supporting many devices.   These user interface components will invoke many business services, and these business services will invoke many data services.

Keep in mind that all these pieces are still to be architected with the separation of concerns resolutely applied.  We just need to stop thinking about single applications as a homogeneous three-tiered stack and instead start thinking about them as a collection of user components and business services that cannot be easily abstracted into a nice, neat three-tier stack.
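
Here is a minimal sketch, with hypothetical names, of how the separation of concerns survives even as the “application” dissolves into a collection of components and services:

# A minimal sketch of the layering described above. All names are hypothetical.

# Data service: the only layer that knows where the data lives.
def fetch_order_records(customer_id):
    return [{"id": "A-1", "total": 120.0, "status": "OPEN"}]  # stand-in for a real data store

# Business service: applies rules, reusable by any consumer in the enterprise.
def open_order_balance(customer_id):
    orders = fetch_order_records(customer_id)
    return sum(o["total"] for o in orders if o["status"] == "OPEN")

# UI components: a web page, a mobile widget, and a chatbot might all invoke
# the same business service rather than embedding their own copies of the logic.
def render_balance_widget(customer_id):
    return f"Open balance: ${open_order_balance(customer_id):,.2f}"

print(render_balance_widget("12345"))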

The bottom line is that the separation of concerns (user interface, business logic, data access) is not going away, but simple, self-contained applications are. All in all, not too big of a paradigm shift for SMBs to work toward...once we start making more progress on our SOAs.