The missing link in IT management: product design-time configuration management
The chances that all elements of a digital product are designed by a single team are slim. You had better know your product’s composition at design time and understand the related producer/consumer relations, so you stay in control of your product and can manage your value streams.
Introduction
In a previous article, I recognized the need for a sound data model to manage your digital products. In this article, design-time configuration management is proposed as a starting point for such a model, and arguments are given for this choice.
With a background in hardware and manufacturing, I have always been amazed by the difference between hardware and software in how design data is handled. With hardware, there is a clear need for a detailed understanding, since you have to order parts to keep your factory going. These parts have to be ordered in time, and elaborate systems are used to manage the supply chain. Therefore, hardware companies exercise configuration management to manage and maintain their product designs. This often includes strict identification conventions and change procedures to guarantee the integrity and consistency of all the company’s design information. In that context, we talk about Product Data Management or Product Information Management.
For the IT industry and software development, this is not the case. Apparently, the need is not felt. And to be fair, making software should not be compared to manufacturing; it probably better resembles continuous prototyping. In contrast to the motto of mass manufacturing, ‘design once, produce many’, iterative development yields a constant flow of changed product designs. ‘Production’ should be fully automated with CICD.
IT management frameworks also pay little to no attention to design-time configuration management. What IT calls configuration management is the registration of the output of the ‘production’ process, in contrast to managing the input, the design. To me, it feels like driving a car by looking in the rearview mirror. If your product design information and your production process are under control, you should know what the outcome is; it should be predictable. I blame this on a past separation of duties, where things got thrown over a wall and the Ops organization, desperately trying to keep things under control, established its own configuration management process and independent CMDB.
Why design-time configuration management?
Why is this type of configuration management important? As hinted at in the subtitle, no complex digital product will be 100% designed by the team that is operationally responsible for it. Your product will be composed of building blocks that are designed outside your team. In these days of cloud services and open-source software, probably between 80% and 90% of the items in your design will be design reuse (design reuse as opposed to run-time dependencies on shared services, which are not the subject of this article).
So there will be upstream design items that are input to your product, and these upstream items may in turn have upstream items of their own. When an upstream item is another team’s responsibility, you have a design-time dependency on another team within your company, and you may therefore require an (informal) contract (SLA).
When the other team is a contractor, you have a design-time dependency on both another team and another company. You will need a formal contract and a formal, structured way to pass information back and forth. When the other team is a vendor, you again have a design-time dependency on another team and company, and you also need a contract, but of a different kind: you will have far less control over the design item, since a vendor targets multiple customers. When the design item is open-source-based, the other team may be elusive, and you possibly cannot have a contract even if you want one. You have even less control over the design item.
In other words, insight into the composition of your product, and consequently into the parties responsible for the items in your product’s design, gives insight into the design-time value stream across both teams and organizations.
As noted before, there is a big difference between producing physical products and producing software-intensive products. For physical systems, you need to manage both the design value chain and the supply chain of physical items; for software, this is limited to the design value chain, since there are no supplies. The design value chain is purely the transfer of knowledge and information.
Another big difference is that for physical products the aim is often to stabilize the design to allow a controlled flow of goods during production, while for software products the aim is often to increase the speed of design change. For software-intensive products, change is a constant and the design is a variable.
And of course there is another important topic: automation / CICD. More about that later.
Information flows in your value chain
Design-time value streams are about facilitating the flow of information between the parties involved in the value chain. So it is about knowledge management: knowledge of your company’s products, and every piece of knowledge required to change, build, and run your products and every item your product is composed of. It is about handling the events related to change. There are multiple events, and therefore multiple information flows, in both directions, upstream and downstream. The information will be either pulled or pushed, depending on the origin of the event; the flows are listed below and sketched as data after the list.
- Designing your product will pull design information from upstream. A very important aspect of this is of course finding candidate items and selecting the items for your product’s design. Are your design decisions based on solid information? Design decisions may lead to contracts with third parties.
- Changing a requirement of your product can cause a pull of requirements upstream. Changing requirements may lead to reconsidering design decisions, design reuse, and therefore contracts.
- The release of a new version or the deprecation of an older version of an upstream item (life cycle management) will be pushed downstream. Failure to respond will leave your product not fully supported.
- Discovering a defect in your product can cause a push of the defect upstream. Failure to respond adequately to defects by upstream parties may lead to reconsidering design decisions and therefore contracts.
- Discovering a major defect in an upstream item leads to the fast release of a new version (a patch) that will be pushed downstream. Failure to respond will lead to your product being at risk. This is similar to life cycle management but more urgent.
- A special case of the above is discovering a security vulnerability in an upstream item. A resolution (a new version) will be pushed downstream. Failure to respond will leave your products at risk, and failure to respond by contractors or vendors may lead to reconsidering design decisions and therefore contracts.
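To make this taxonomy concrete, here is a minimal sketch that encodes the flows above as data, using Python as a neutral notation. The event names and the Direction/Mode classification are illustrative, taken directly from the list; they are not a prescribed model.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    UPSTREAM = "upstream"      # information travels towards upstream parties
    DOWNSTREAM = "downstream"  # information travels towards downstream consumers

class Mode(Enum):
    PUSH = "push"
    PULL = "pull"

@dataclass(frozen=True)
class FlowEvent:
    name: str
    direction: Direction
    mode: Mode

# The information flows from the list above, expressed as data.
FLOWS = [
    FlowEvent("design information", Direction.DOWNSTREAM, Mode.PULL),
    FlowEvent("requirement change", Direction.UPSTREAM, Mode.PULL),
    FlowEvent("life cycle event (release / deprecation)", Direction.DOWNSTREAM, Mode.PUSH),
    FlowEvent("defect report", Direction.UPSTREAM, Mode.PUSH),
    FlowEvent("patch release", Direction.DOWNSTREAM, Mode.PUSH),
    FlowEvent("security advisory", Direction.DOWNSTREAM, Mode.PUSH),
]
```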
Design-time configuration management
Design-time configuration management is about supporting the exchange of information between DevOps teams and enterprise stakeholders by providing structure to technical product data. The central elements are design-time configuration items (DCIs), in contrast to CMDB configuration items, which reflect run-time items. A DCI must have at least the following characteristics (a minimal data sketch follows the list).
- enterprise-wide unique identification; When communicating in an engineering context, it should be clear what exactly we are talking about. In the configuration management of other engineering disciplines, this identifier is called a Part Number. For software-intensive products, where changes are frequent, special attention is required for version identification, e.g. using SEMVER. How to set this up is a topic of its own.
- technology agnostic; The basic setup of DCIs should be technology agnostic. Digital products and the IT industry change rapidly, and the exchange of information should not be aimed at or optimized for a specific technology. Furthermore, other enterprise stakeholders, interested in for instance costs and risks, should not be bothered with technology.
- cross-reference tool data; In practice, technical product data will be artifacts in tools. Development / CICD tools are mostly functionally oriented, and as a consequence artifacts related to a single DCI will be spread over many tools. A DCI should not copy the artifacts but cross-reference them, such that artifacts in different tools are associated with the same DCI. As said in the previous article, standards and guidelines regarding the use of tools are very important: they allow this cross-referencing to be done by convention, limiting the data needed.
- flexible meta-data model; Given the above requirements, and the fact that new technologies and new tools can be expected frequently, technology- and tool-specific data extensions should be possible. Therefore, the catalog of DCIs will be accompanied by a type model to accommodate meta-data definition and validation. A well-defined set of types can be used for identifying patterns and for cross-DCI architecture/engineering rule validations. A NoSQL / document-based approach is probably preferred.
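To make these characteristics concrete, here is a minimal sketch of what a DCI record could look like, in Python. All names (part_number, dci_type, the example URLs) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DCI:
    """A design-time configuration item (field names are illustrative)."""
    part_number: str   # enterprise-wide unique identifier
    version: str       # e.g. a SEMVER string such as "2.1.3"
    dci_type: str      # key into the accompanying type model
    # Cross-references to artifacts in tools (tool name -> artifact URL);
    # the DCI points at the artifacts, it does not copy them.
    artifacts: dict[str, str] = field(default_factory=dict)
    # Type-specific extensions, validated against the type model.
    metadata: dict[str, object] = field(default_factory=dict)

checkout_service = DCI(
    part_number="ACME-PAY-0042",
    version="2.1.3",
    dci_type="microservice",
    artifacts={
        "git": "https://git.example.com/payments/checkout",
        "ci": "https://ci.example.com/jobs/checkout/2.1.3",
    },
    metadata={"owner-team": "payments"},
)
```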
The primary entry point for accessing the data is of course your product portfolio. For each ‘logical’ product, a list of released product versions and WIP versions is maintained. Each product version is represented by a single DCI. These product-level DCIs have the product composition defined such that upstream elements can be found, as sketched below. Besides upstream elements, a DevOps team could also use this model to describe the building blocks they design themselves. A more detailed description of a potential data model, and other reasons to have one, is the subject of another article.
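A sketch of the portfolio as entry point, again with illustrative names: each ‘logical’ product maps versions to a product-level DCI, and that DCI defines the composition from which upstream items can be resolved.

```python
# Illustrative portfolio: logical product -> version -> product-level DCI data.
PORTFOLIO = {
    "checkout": {
        "2.1.3": {
            "dci": "ACME-PAY-0042@2.1.3",
            "composition": [
                "ACME-LIB-0007@1.4.0",  # building block designed by another team
                "postgresql@13.2",      # vendor / open-source design reuse
            ],
        },
    },
}

def upstream_items(product: str, version: str) -> list[str]:
    """Resolve the direct upstream design items of a product version."""
    return PORTFOLIO[product][version]["composition"]

print(upstream_items("checkout", "2.1.3"))
```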
It should be clear by now that a Version Control System such as Git is not a Configuration Management system as intended here.
Does the IT4IT Reference Architecture miss a Requirements to Publish value stream?
The 2.1 version of the IT4IT Reference Architecture captures the design and development activities in the Requirements to Deploy value stream. It focuses on the products (services) IT delivers to end users, who are potentially customers. However, there seems to be no attention to the ingress of upstream design items. The phrase ‘Requirements to Deploy’ suggests that the creation of the factual product instance (production deployment) is part of this value stream. Upstream teams that produce design items do not ‘deploy’; they publish new versions of design items that downstream teams can incorporate into their products. Maybe there should be another value stream, called ‘Requirements to Publish’, for these upstream teams.
Does the Flow Framework miss the concept of DCIs?
The Flow Framework described in the book ‘Project to Product’ by Mik Kersten identifies an artifact network and a value stream network. The book is not detailed enough to tell whether the need to relate multiple artifacts to a single logical DCI is identified, nor how the relationship with ownership is established. It would be a disadvantage to have each artifact linked individually to an owner.
DevOps and CD require self-service consumption of upstream DCIs
In the ideal DevOps world, a product can be defined as structured data, including the application and configuration of upstream DCIs. There are two distinct approaches, in order of preference (the declarative approach is sketched after the list):
- declarative; The desired state of a product version is defined. There needs to be a mechanism to compare the current state with the desired state and to resolve any differences. The mechanism is idempotent by definition.
- imperative; A set of instructions to create, change, or remove items that assumes an explicit start state. Idempotency can be achieved only at high cost.
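A minimal sketch of the declarative approach, assuming a toy product state modeled as a dictionary of item names to versions (all names illustrative): the reconcile function compares the desired state with the current state and resolves the differences, and a second run changes nothing.

```python
def reconcile(desired: dict[str, str], current: dict[str, str]) -> dict[str, str]:
    """Declarative approach: converge the current state to the desired state."""
    for name, version in desired.items():
        if current.get(name) != version:
            current[name] = version  # create or update the instance
    for name in list(current):
        if name not in desired:
            del current[name]        # remove what is no longer desired
    return current

desired = {"ACME-LIB-0007": "1.4.0", "postgresql": "13.2"}  # illustrative items
current = {"postgresql": "12.6", "redis": "6.0"}            # drifted actual state

current = reconcile(desired, current)
assert current == desired                             # converged to the desired state
assert reconcile(desired, dict(current)) == desired   # idempotent: a second run is a no-op
```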
In a less ideal DevOps world, and current-day reality, the product is defined in some non-machine-readable form: a document combined with some diagrams. A release pipeline can always take the imperative approach to orchestrate the product instantiation and rely on underlying automation to create instances of the individual DCIs.
Both approaches assume that the team / product consuming the upstream DCIs can rely on a self-service implementation. In a worse scenario, upstream DCIs cannot be consumed via self-service, and a request must be issued in some ticketing system for a resource to be created. This has a detrimental effect and pulls the rug out from under DevOps and Continuous Delivery: the team responsible for the product depends on potentially multiple other teams for the instantiation of the upstream DCIs. In the worst-case scenario, such a request is not formalized as structured data and relies on human interpretation.
Bill Of Materials of DCIs
From a product management perspective, a Bill Of Materials (BOM) should give insight into the design reuse and therefore the dependencies on upstream teams. This assumes that DCIs and their ownership are centrally registered. This can be done by extending the product portfolio with ‘IT internal’ products and registering ownership, where each product has DCIs representing each specific version published. Since upstream products can themselves rely on other upstream products, inside or outside the enterprise, a full BOM can be generated that shows all transitive design reuse. This comes at a cost, since BOM data is derived from design data. When the design data is structured and has a declarative form, creating a BOM can be automated, as sketched below. In other cases, manual input is required, which should be avoided as much as possible.
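A minimal sketch of generating a full BOM from declarative composition data, with illustrative item names: the full BOM is simply the transitive closure of the composition relation.

```python
# Illustrative composition data: item -> direct upstream design items.
COMPOSITION = {
    "checkout@2.1.3": ["ACME-LIB-0007@1.4.0", "postgresql@13.2"],
    "ACME-LIB-0007@1.4.0": ["openssl@1.1.1k"],
}

def full_bom(item: str, seen=None) -> set[str]:
    """All transitive upstream design items of a product version."""
    seen = set() if seen is None else seen
    for upstream in COMPOSITION.get(item, []):
        if upstream not in seen:
            seen.add(upstream)
            full_bom(upstream, seen)
    return seen

print(sorted(full_bom("checkout@2.1.3")))
# ['ACME-LIB-0007@1.4.0', 'openssl@1.1.1k', 'postgresql@13.2']
```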
Conclusion
To gain full insight into the design-time dependencies between your DevOps team and the other parties involved, design-time configuration management is required. This insight allows you to coordinate and structure the downstream and upstream information flows associated with different events: life cycle management, defect discovery, vulnerability discovery, risk, etc. The ability to produce a BOM of each digital product and relate each line item to artifacts in tools can satisfy enterprise stakeholders’ use cases that are otherwise difficult to implement.
The next article will go into more detail about what a DCI entails.
References
- Part Number, Wikipedia, published 2 November 2019, referenced 1 April 2021
- SEMVER, https://semver.org/
- The IT4IT Reference Architecture, Version 2.1, Second Edition, April 2017, The Open Group, Van Haren Publishing
- Project to Product, by Mik Kersten, 2018, IT Revolution
- Flow Framework, https://flowframework.org/