
Whether DevOps is hype or a 'magic word,' it is mandated in most new projects nowadays. Everyone wants to do it, but in most cases it is done blindly. Mostly, DevOps is limited to automation, i.e. a few CI/CD pipelines. However, there is a lot we can do with DevOps beyond CI/CD. To explore that, we first need to understand where we are lagging or what we are avoiding.

There are a few areas of DevOps that are either grey or biased:

  • Intent – The intent of DevOps is to increase speed, improve quality and cut overheads. However, in most cases, we focus more on the first two than on the last one.
  • Team - We often take a biased decision about the composition of the DevOps team. We do not have a clear view on whether we should upskill the ops team or the app development team. Often, developers see the move as a downgrade, unless they hear about the paycheck of a DevOps engineer.
  • Automation - DevOps' biggest misfortune is that, on the ground, it is limited to automation, no matter how much we talk about culture, practices, etc. If you don’t believe this, look at the JD of any DevOps role.

These misunderstandings and misalignments impose various limitations on DevOps, including centralized control, incompleteness and a lack of tracking. At the core of all of these, the focus is more on automation than on the context of DevOps.


Data First DevOps (DFD) starts with capturing the context in terms of data. It focuses on data along with automation. Data is the input, the output and the helper for automation; everything in DFD revolves around data. Having data in place helps in various aspects:

  • Conditional operations
  • Inter-dependent operations
  • Dynamic additions in the system
  • Many other advanced scenarios

The approach for DFD is fairly simple - think data, create data, store data, process data and produce data.
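The cycle above can be sketched in code. The following is a minimal, hypothetical illustration of "think data, create data, store data, process data and produce data" for a single pipeline run; the function and field names are assumptions for the sake of the example, not part of any DFD standard.

```python
import json

# Hypothetical sketch of the DFD cycle for one pipeline run.
# The in-memory dict stands in for a real data store.

def create_data(build_id, branch):
    """'Think data, create data': capture the context of a pipeline run."""
    return {"build_id": build_id, "branch": branch, "status": "started"}

def store_data(record, store):
    """'Store data': persist the context record in the data store."""
    store[record["build_id"]] = record
    return store

def process_data(store, build_id, status):
    """'Process data, produce data': update the stored record and
    emit output data for downstream consumers."""
    record = store[build_id]
    record["status"] = status
    return json.dumps(record)

store = {}
store_data(create_data("42", "main"), store)
output = process_data(store, "42", "succeeded")
```

In a real implementation, the store would be a database or key-value service, and each pipeline stage would read and enrich the same context record.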

However, this is easier said than done. While the current approach to DevOps is highly focused on automation, DFD takes a slightly different approach. It in no way ignores or underestimates the importance of automation; rather, it adds value on top to provide more meaningful and complete results. The idea of DFD can be a little disturbing to purists, but it is very natural to a lot of people. It states: start with contextual data and do whatever it takes to implement the requirement. The tools used in DevOps are not meant to be limited to the standard set. Go beyond that and think of DevOps as application development in itself.


DFD is an approach, backed by principles, which focuses on the importance of:

  1. Viewing almost every possible context in the DevOps process, i.e. input, output or any process, as data.
  2. Storing all the data in a relevant data store.
  3. Using the data to perform DevOps activities.

Below are a few terminologies which might be helpful:

DFD Terminologies

Process input

The input required to trigger a DevOps process, irrespective of DFD.

Input data

The data used to support either the process input or the DevOps process itself.

Process output

The output of the DevOps process.

Output data

The metadata pertaining to the process output or the DevOps process.
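To make the four terms above concrete, here is a hypothetical sketch that models them as simple types. The field names are illustrative assumptions only; any real pipeline would carry its own context fields.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the four DFD terms as simple types.

@dataclass
class ProcessInput:
    trigger: str            # what triggers the DevOps process, e.g. a git push

@dataclass
class InputData:
    environment: str        # data supporting the process, e.g. the target env

@dataclass
class ProcessOutput:
    artifact: str           # the output of the DevOps process itself

@dataclass
class OutputData:
    metadata: dict = field(default_factory=dict)  # metadata about the output

run_input = ProcessInput(trigger="git push")
run_data = InputData(environment="staging")
result = ProcessOutput(artifact="app-1.0.0.jar")
result_meta = OutputData(metadata={"duration_s": 120, "status": "passed"})
```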

Here are a few principles which can be followed to make things a little easier to adapt and implement.

  • Data First: Think of data in any element or process
  • Polyglot data store: Be open to storing and retrieving data from different types of data stores
  • Data coherence: Ensure that you keep similar data together
  • Data Stability: Ensure that your data remains stable even if there are failures
  • Data safety: Keep your data safe by taking backups
  • Dumb pipeline and smart data: The most important principle! Your pipeline behaves as per the data fed into it.
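The last principle can be sketched as follows: the pipeline is a generic loop that knows nothing about build, test or deploy, and its behaviour comes entirely from the data fed into it. The stage and action names below are hypothetical placeholders.

```python
# Hypothetical sketch of "dumb pipeline and smart data": the pipeline
# is a generic executor; the data describes what it should do.

def run_pipeline(stages, actions):
    """Execute whatever stages the data describes, in order.
    The pipeline itself has no built-in knowledge of any stage."""
    results = []
    for stage in stages:
        action = actions[stage["action"]]
        results.append(action(stage.get("params", {})))
    return results

# The "smart data": adding, removing or reordering stages changes
# pipeline behaviour without touching pipeline code.
stages = [
    {"action": "build", "params": {"target": "app"}},
    {"action": "test", "params": {"suite": "unit"}},
]

actions = {
    "build": lambda p: f"built {p['target']}",
    "test": lambda p: f"ran {p['suite']} tests",
}

results = run_pipeline(stages, actions)
```

In practice the stage list would live in a data store (per the polyglot principle), so changing a pipeline means changing data, not code.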


Implementing DFD can have enormous benefits, depending upon your use case. However, at an abstract level, here are some key categorical benefits:

  1. The smart data and dumb pipeline principle makes you future-ready.
  2. By designing data and relations correctly, you can make your pipelines more user-friendly. Hence, you can hand over the pipelines to their users and thus increase productivity.
  3. With the relevant data in place, micro-authorization can be implemented where needed. Basically, data-based decision-making becomes possible.
  4. The captured data can be used for various future automations, along with basic trend analysis of DevOps operations.
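The micro-authorization benefit can be sketched simply: once per-stage permission data is stored, each pipeline step can be gated per user instead of the pipeline being all-or-nothing. The user and stage names below are hypothetical.

```python
# Hypothetical sketch of micro-authorization driven by stored data.
# In practice, this permission data would live in the DFD data store.

permissions = {
    "alice": {"build", "test", "deploy"},
    "bob": {"build", "test"},
}

def can_run(user, stage):
    """Data-based decision: allow a stage only if the stored
    permission data grants it to this user."""
    return stage in permissions.get(user, set())

bob_can_deploy = can_run("bob", "deploy")
```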

DFD offers a new approach to implementing DevOps, as an additional layer on top of the existing automation. This extra bit goes a long way when used properly.



About the Author

Abhideep Chakravarty
Senior Architect, DevOps/ Cloud

Abhideep is a Java, Cloud and DevOps architect, currently focused on DevOps. He believes in humane and meaningful solutions that help businesses and people by increasing quality and productivity, instead of conventional solutioning. Abhideep, along with his team, has also developed a DevOps-related Mindtree IP. He has also worked with customers on large cloud migrations using application lift-and-shift migration and application re-architecture strategies.
