Monday 30 December 2019

Cloud Events: Part 1


Cloud Events is one of the emerging specifications which tries to promote interoperability between cloud-native eventing systems. As you know, Event-Driven Architecture promotes loose coupling between distributed systems. An event producer emits an event with context & facts about an occurrence of significance, to be consumed by event consumers. The occurrence could be the creation/update/deletion of an entity like a customer in an application, or it could be something like a code commit to GitHub. The challenge has been that these event data models have semantics specific to each application landscape or platform. It was not an issue until we reached a scenario where we need to assemble business capabilities spanning multiple cloud-native business platforms and technology platforms over which we don't have much control in terms of their design & evolution. For instance, AWS services have their own event semantics, and similar is the case for cloud-native business platforms like Stripe. This has increased the learning curve for developers in understanding those semantics & also the operational overhead of configuring routing & monitoring for the events produced by each class of event sources.


Cloud Events specification intends to address the above challenges by defining common event semantics so that
  • An event consumer can specify the event or class of events it is interested in even before event producers produce such events. For instance, one could register interest in listening to all events concerning code commits to any source code repository.
  • An event producer can declare the events it will generate even before any consumer for them exists.


Cloud Events specification focuses on a set of attributes, or rather metadata, that enables the routing of an event to the appropriate event consumers who have subscribed to such events. It does not concern itself much with the event payload as such, but with those metadata elements which impact routing. The specification calls out the following as out of scope: "Function build and invocation process," "Language-specific runtime APIs," "Selecting a single identity/access control system," "Inclusion of protocol-level routing information," and "Event persistence processes." The primary intention behind these exclusions is to avoid anything which could potentially impact interoperability. For instance, the event model cannot have protocol-specific routing attributes, as each protocol has its own routing semantics, so there is no need to duplicate them. At the same time, an event may be intended for handling by a webhook-based consumer, but due to the unavailability of that consumer, it is pushed to a dead-letter queue, from which it could be picked up by some other consumer for further processing. In short, the event producer cannot foresee how its events will be consumed, so protocol-specific routing semantics do not encourage interoperability in such scenarios. Similarly, the specification does not dictate how event objects are constructed, i.e., how the attributes of event objects are populated.
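
As an illustration of the kind of metadata the specification is concerned with, here is a minimal sketch of a CloudEvents-style envelope represented as a Python dictionary (the source, type and data values are purely illustrative):

    # Minimal sketch of a CloudEvents-style envelope (illustrative values)
    commit_event = {
        "specversion": "1.0",                          # version of the spec in use
        "type": "com.example.repo.commit",             # class of event consumers can subscribe to
        "source": "https://github.com/example/repo",   # context in which the event occurred
        "id": "a1b2c3d4",                              # unique within the source
        "time": "2019-12-30T10:00:00Z",                # timestamp of the occurrence
        "datacontenttype": "application/json",
        "data": {"committer": "alice", "branch": "master"},  # payload, opaque to routing
    }

Routing infrastructure needs only the attributes outside "data" to deliver the event; the payload itself stays opaque.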

In this post, I tried to convey the intent of the new specification. I will share more details on its core attributes, extensibility, and the frameworks supporting it, among other aspects, in the next post.





Sunday 30 December 2018

Business Events: Need for an API Developer Portal kind of capability to improve visibility


API platforms have played a substantial role in enabling self-service integration. They have simplified concerns like edge security, message transformation, orchestration, throttling, monetization, and others in a significant way. For me, discoverability of APIs through the developer portal is the feature I like the most because it is simple in approach and at the same time powerful. Why is it powerful? One can look at the APIs available for consumption at a centralized portal; not only that, the APIs are fully documented regarding their intent, URI, request/response structure and other relevant attributes. Traditionally, it was a pain to reuse existing business capabilities in a large enterprise, as information regarding real-time interfaces was hidden within siloed teams, and discovering and getting a service endpoint provisioned for consumption was painful. The API platform has simplified this in a significant way.


I would strongly recommend extending the same model to business events as well. I am a strong proponent of the business event as a first-class citizen, and it should be treated in that manner.
I want to quote from my article published in 2013 here: “Real Time Enterprise is the need of the hour in the fast-changing world. The primary attribute of such an enterprise is the ability to act on the key business event as quickly and effectively as possible. As the latency of action increases, the ability to derive value out of it will decrease substantially and yielding not much value. These types of business events could be deduced by analyzing & correlating the events emanating from business processes/applications. It could be fed back into the business processes to make it more intelligent and context driven.”

A good percentage of enterprises have failed to leverage business events effectively. There is a plethora of reasons for this. First of all, the failure to identify business events and then create & publish the right semantics in terms of event message structure & its version management is one of the primary reasons. But I want to stress that the lack of centralized infrastructure which helps to discover business events & their associated semantics is another major factor. In this context, it would be better to have a centralized developer portal (like the API Developer Portal) which provides details about the business events published by applications within a business capability, their message structures, and the endpoints from which they can be subscribed. This visibility greatly promotes effective use of business events.
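
As a rough sketch of what one entry in such a business-event catalog might capture (the field names below are purely illustrative and not from any particular portal product):

    # Illustrative entry for one business event in a developer-portal-style catalog
    order_created_entry = {
        "event_name": "OrderCreated",
        "business_capability": "Order Management",
        "publishing_application": "E-Commerce Platform",
        "schema_version": "1.2",
        "message_schema": "https://portal.example.com/schemas/order-created-1.2.json",
        "subscription_endpoint": "amqp://broker.example.com/order-events",
        "description": "Emitted when a customer order is successfully placed.",
    }

With such entries published centrally, a consuming team can discover the event, its message structure and the endpoint to subscribe to without chasing down the owning team.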

Saturday 17 November 2018

Microservices, Microservices, everywhere, Not many real microservices to find!!!

A Microservices First Approach without assessing its fitment is one of the common mistakes one sees across multiple engagements. I doubt there is any development project which does not have the microservices tag associated with it!!! It is quite understandable when a technology paradigm goes through the hype cycle. One of the interesting tweets I saw in this space: "If you can’t build a well-structured monolith, what makes you think microservices is the answer?" This tweet highlights the importance of design maturity in the adoption of microservices; that is one of the primary challenges which needs to be addressed by any organization starting the journey of adopting microservices. I want to cover a few criteria for the fitment of Microservices Architecture in this blog.

The most important criterion for me is the ability to allow independent evolution of the sub-capabilities of an app/product/platform. This is very important when you are developing a product/platform. I was part of the development of a Service Procurement Management product long back. With the first version of the product, we got our first customer, a Fortune 500 company. Based on that success, we got four significant customers in the immediate quarter, and most of them wanted to go for a global release. Interestingly, those customers came up with their own requests for additional features to "sub-capabilities" of that product, and so were the timelines for the release. Those feature requests were not only justified but key to the evolution of the product. At the same time, there was one case where the customer wanted only a subset of the capabilities of the product. So we had a scenario where individual sub-capabilities of the platform needed to evolve at their own rate and move to production at their own pace. This is one of the ideal cases for the adoption of microservices architecture. Interestingly, this case prompted me to publish an article on "microservices based development" way back in 2008 on DZone. (Obviously, at that time, the microservices concept was not there in the mainstream, and my initial idea was built using technology elements of that time...I would put it as an idea under evolution :) )

Another important factor one may want to consider is whether the sub-capabilities of the application have different scalability requirements. For instance, take the scenario of an e-commerce platform: the product service is far more heavily used than the order management service, as the number of product views turning into orders is comparatively small. So in this case, the scalability requirements of the platform's sub-capabilities differ, and microservices architecture is ideal. In addition, microservices-based architecture could also help in building highly available and highly resilient applications.

Another important aspect I would like to highlight: when you are not clear about the domain of the problem space (that of the application/platform), do not start splitting the app into microservices. Early splitting will create more headaches than it solves, as you will not have clear insight into the boundaries of the microservices.

In this blog, I tried to cover the challenges and things to take care of while adopting microservices. I want to highlight the need for a core team with strong design maturity to drive microservices adoption. As I said in the beginning, organizations which are incapable of building modularized monoliths have a greater chance of failure while adopting microservices.

Sunday 15 October 2017

To blockchain or not!!!

A hot discussion with one of my friends is the primary trigger for this blog. It’s about blockchain vs. the centralized database. Since blockchain is going through the hype cycle, there is a natural tendency to go for it even if it is not the ideal choice. Blockchain itself has two major variations: the permissionless blockchain and the permissioned blockchain. A permissionless blockchain is an open and decentralized one which does not put any restriction on who can join the chain, so technically any peer can read from and write to the blockchain. In the case of a permissioned blockchain, there will be restrictions on who can read from or write to the blockchain. (At the same time, many in the industry hold the view that a permissioned blockchain is not a blockchain at all. I will blog about that in the near future.)

Now the question is: when should one adopt a permissioned blockchain vs. a permissionless blockchain vs. a centralized database? If the transactions between the entities do not involve any state to be stored, then there is no need for a centralized database or a blockchain. If state needs to be stored, then we need to consider whether there is a single trusted party who could maintain the state information on behalf of the other entities; in that case, a centralized database should be considered. If no single trusted party can play that role, then blockchain could be considered. If all the writers to the blockchain are known but not trusted, a permissioned blockchain could be considered. But if the writers are not known, then a permissionless blockchain should be considered.
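
The above decision flow can be summarized in a small sketch (a purely illustrative helper, not taken from any framework):

    # Rough decision helper mirroring the checks described above
    def suggest_storage(needs_shared_state, single_trusted_party, writers_known):
        if not needs_shared_state:
            return "No centralized database or blockchain needed"
        if single_trusted_party:
            return "Centralized database"
        if writers_known:
            return "Permissioned blockchain"    # writers known but do not trust each other
        return "Permissionless blockchain"      # writers unknown

    # Example: known banks that do not fully trust each other sharing settlement state
    print(suggest_storage(True, False, True))   # -> "Permissioned blockchain"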

Please note that the performance of a centralized database is far ahead of a blockchain. A permissionless blockchain like Bitcoin can handle only about five transactions per second, whereas the Visa interbank system can handle around 2,500.

The above is a quick and dirty check to assess the fitment of blockchain in your scenario. At the same time, the innovation possibilities on top of blockchain are vast, so it is recommended to look at blockchain from the business angle as well. There are many industry materials in this evolving area; HBR and BCG have published many articles in this space which will give you the business side of blockchain, and they are worth the read.


Sunday 11 December 2016

Mind Mapping (Not the Traditional Drawing One) - An Important Tool in Design Thinking


A typical design thinking approach includes four stages, viz. WHAT IS, WHAT IF, WHAT WOWS and WHAT WORKS. The WHAT IS stage involves collecting data about the AS IS state (current state) and pain points from multiple perspectives, like the end user’s perspective of the existing offerings and challenges with the existing IT landscape. That represents one of the important steps in the WHAT IS part of design thinking. Despite all this groundwork having been done, solution choices are sometimes hijacked by the most influential stakeholders, and in a good percentage of cases those options are biased and preconceived, made without looking into the collected data which represents the ground realities.
I believe the above challenge can be avoided if design criteria are defined properly for the WHAT IF stage and then used as a governance mechanism. This process should be agreed upon upfront at the beginning of the engagement. One technique I have picked up, during my Coursera course on Design Thinking from the Darden School of Business (University of Virginia), is Mind Mapping. It is a simple and elegant technique. The following section covers the application of the tool in brief using a fictitious scenario. I have added my own flavor to the approach.
Let’s assume that one of your customers is facing high customer attrition. Things like the long time required to fulfill customer orders, rude behavior of customer service professionals, and other factors are perceived to be the reasons. Now you are engaged to analyze the situation and provide a solution to address it. Obviously, you will start with the groundwork of collecting details by talking directly to end customers and to various stakeholders, like customer service personnel in the client’s organization, whose actions directly or indirectly impact the end customers. Once the data collection is complete, it is time to apply the mind mapping technique in the following manner to define the design criteria for the WHAT IF stage.
  1. Organize & structure the collected data into the following forms: PowerPoint decks, visual representations of customer journeys, value chains, pain points and related artifacts. Bring in UI experts to create the visual representations of customer journey maps, pain points and other related items. The intent is to create a simple representation of the data collected.
  2. Now take a large conference room at the customer location and arrange those elements on the four sides of the room in a logical flow, say from marketing to order to after-sales customer support.
  3. Identify the stakeholder groups who will participate in the session. Include stakeholders from the business side, the IT side, support functions like customer support services, and representatives of end customers. Form four to five teams, each having representation from every stakeholder group. A team with almost equal representation from the various stakeholder groups will bring different perspectives to the table, which helps bring the right balance in defining solution criteria.
  4. Invite twenty to thirty people from the various stakeholder groups for a one-day session. At the beginning of the session, explain the rules and how the process will be carried out. All of them should be asked to go through the data captured on the four sides of the room and to note down a minimum of five important data points per participant. This could go on for one to two hours depending on the amount of data provided.
  5. Once done, each one should be asked to sit with their team and go through their list.
  6. Each team should be asked to name key themes from the data points they collected. As the moderator, you should note those themes and supporting data points on the main board.
  7. Once that is done, each team should be asked to analyze the data points to identify any patterns within each theme and also across themes.
  8. Now ask each team to use those themes and patterns to define design criteria as if anything were possible.
  9. Once the design criteria are defined by each team, ask them to look at the other teams’ options, discuss, and then, as a single group, create a master list of design criteria for the WHAT IF stage.

During this exercise you will observe that participants have a tendency to jump directly into solutions without properly analyzing the data at hand. As the moderator, you should ensure that nobody discusses solution options during the mind mapping session.

Monday 9 November 2015

Healthcare Industry: Inevitable Move from Volume-Based Care to Value-Based Care

Recently I have been part of an interesting healthcare assignment. It prompted me to look at the healthcare domain much more closely to see the latest trends in that space. As always, HBR articles are one of my trusted sources of information in this space. One of the significant observations is how innovative healthcare providers are moving from a volume-based business model to value-based healthcare. Traditionally, the business model of healthcare providers like hospitals was fundamentally a volume business, with revenue based on the number and type of procedures performed. But now payers are also pushing providers toward reimbursement based on the quality of care provided, which is primarily a value-based model. In the short term, it will have a significant negative impact on providers' profits.

HBR has provided a few cases where the value-based care model is leveraged. Let's take the case of the Mayo Clinic in cancer treatment. A patient with breast cancer may need to undergo an operation for partial or complete removal of the breast. In normal cases, the tissue is sent for analysis after the operation to confirm whether there are any residual cancer-affected areas; it normally takes one to two days to complete the analysis. But at the Mayo Clinic, the tissue is sent to their pathology lab during the operation itself and verified within half an hour. This reduces the risk of repeat surgery drastically; as per the HBR report, it has avoided 96% of repeat operations. In the short term, it increases the cost of the operation for the patient, but it reduces the overall medical cost. For the provider, it cuts down the revenue generation opportunity in terms of repeat operations & the associated procedure and hospitalization charges.

There is another case where a hospital enforced a pre-authorization requirement before doctors could prescribe any radiological procedure like a scan. It is basically to force the reuse of scan reports if the patient has already undergone a scan recently in the same hospital or another hospital. It ensures that the patient need not go through the same procedure again, though it may look like a loss of revenue for the hospital. In both of the above cases, providers are playing an innovative value-based care strategy. On a long-term basis, it will help in attracting customers who are becoming more and more value conscious. It is a sustainable strategy, as the industry will inevitably move from the "fee for service" model to the value-based healthcare model. But to execute this strategy, providers should have expertise across the entire gamut of care, from prevention to intervention, supported by advanced IT systems (e.g., healthcare analytics engines, e-medicine, advanced EHR systems). One of the important elements for executing this strategy is the ability to build strong relationships, based on principles of higher professional pride & higher ethical ground, with doctors, support staff, other providers, social organizations and government agencies.

Monday 7 September 2015

Containerization & Microservices


One of the interesting aspects of microservices is the ability to deploy each microservice group independently, unlike a monolithic application. It provides the ability to evolve each capability implemented as a microservice independently based on business demands. At the same time, it also provides the opportunity to handle the scalability requirements of individual capabilities independently. Now, this brings up the question of deploying those services. In this context, I believe microservices and containerization are a match made in heaven :) Let me explain in detail.

Now let's start looking at the deployment options we have. Will we be deploying each individual microservice group on its own physical machine? Obviously not, as it is a costly proposition. The next obvious choice is a traditional virtualized environment. Virtualization allows you to split up a physical machine into separate hosts, with each virtualized host capable of running a guest OS and applications on top of it. Does this mean we could split the physical server into any number of virtualized hosts? Yes and no. Yes, we could split it, but only to a certain extent. Type 2 virtualization has a host OS with a hypervisor running on top of it. The role of the hypervisor is to map resources like CPU and memory from the virtual hosts to the physical host. Each virtualized host has its own guest OS with its own kernel. The virtualized hosts are separated from one another, and from the physical host, by the hypervisor. To perform those activities, the hypervisor also consumes CPU cycles and memory. As we add more and more virtualized hosts, the hypervisor takes more of those resources to perform its tasks. Obviously, beyond a certain point, the performance of the overall infrastructure starts degrading drastically.
Now let’s look at another lightweight alternative: Linux containers. A Linux container is lightweight because it doesn’t need a hypervisor to split and control each virtual host. Instead, it uses the idea of creating a separate process space in which other processes live. So each container is effectively a subtree of the system process tree and is given the physical resources like CPU, memory and others allocated to it. Since we don’t need a hypervisor, we save a lot of resources, which makes it possible to provision more containers than typical virtual machines. At the same time, a Linux container is faster to provision than a full-fledged VM image. Docker is one of the most popular lightweight container technologies built on top of this concept. Instead of relying on Linux containers alone, it implemented its own "libcontainer", which makes it possible to run containers on top of Microsoft's OS as well rather than being limited to Linux. Docker's promise is the ability to run anything (if an app can run on a host, it can run in a container) and run anywhere (cloud/virtual/physical). It also supports the concept of a Docker image repository to which tested container images can be pushed: basically, build once, test it, push it to the container repo, and then pull it to run anywhere. From the repository, container images can be deployed to Dev/QA/UAT/Production environments without worrying about environmental dependencies and other inconsistencies across environments. If any change needs to be made, the container is simply replaced with a newer container image in no time (Docker also provides support for syncing only the diff).
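
To make the build-once, push, pull and run-anywhere workflow concrete, a typical sequence with the standard Docker CLI looks roughly like this (the image and registry names are purely illustrative):

    # Build an image for one microservice from its Dockerfile and tag it
    docker build -t registry.example.com/order-service:1.0 .

    # Push the tested image to the container image repository
    docker push registry.example.com/order-service:1.0

    # On any Dev/QA/UAT/Production host, pull the same image...
    docker pull registry.example.com/order-service:1.0

    # ...and run it, mapping the service port to a host port
    docker run -d -p 8080:8080 registry.example.com/order-service:1.0

The same immutable image moves through every environment, which is exactly what removes the environment-specific inconsistencies mentioned above.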

The above capabilities make it ideal for microservices deployment. Containerization is a proven concept and can be adopted without the risk of being an early mover.