
Accelerating DevOps With Artificial Intelligence and Machine Learning

The author brings 20 years of hands-on experience in Artificial Intelligence, Big Data, Cloud Platform Management, PaaS product development, and technical expertise in Spark, Hadoop, and HBase.

DevOps is now a fairly well-understood methodology, and its adoption is driven more by its benefits than by mandate within the organisation. Organisations adopting DevOps practices can deliver software faster, enabling a shorter time to market, while more stable deployed products increase customer satisfaction. As operations are easily scalable, organisations can grow the business without having to increase their operating expenses proportionately. They can also manage spending on infrastructure utilisation, maintenance and upgrades, reducing unnecessary capital expense.

However, as organisations grow and scale, the efficiencies seen in the early part of the DevOps journey tend to plateau, mainly because vast data volumes make manual decision-making a critical bottleneck. This is where artificial intelligence (AI) comes in. AI and DevOps work together to build an efficient IT organisation. Large organisations, especially, have a pressing need to identify problems and solve them intelligently and in a time-bound manner. Infrastructure and operations (I&O) leaders must leverage AI techniques for data-driven, automated decision making to ensure business agility and stability.

Here are some areas where AI technologies will improve the value of DevOps adoption further.

Accelerated time-to-market
The concept of continuous workflows such as continuous integration, continuous delivery, continuous deployment, and more is well understood. Many tools make it easy for organisations to reap benefits very soon after adoption. But having a process is just the beginning: these tools also produce data, which can be mined to improve the process itself.

AI technologies can identify development code that breaks builds or causes unit test failures before it gets pushed to version control, based on past patterns. Automated checking is facilitated by using activity data such as log files and applying machine learning (ML) models in automated testing to emulate production-like conditions. Support Vector Machine (SVM) algorithms can also be used for the classification and scoring of software packages before deployment.
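
As a rough illustration of the idea, the sketch below trains a Support Vector Machine (using scikit-learn) to score an incoming change by its estimated risk of breaking the build. The features (lines changed, files touched, the author's recent failure rate) and the synthetic training data are assumptions made for this example, not a prescribed feature set.

```python
# A minimal sketch: scoring incoming changes with an SVM classifier.
# The features and the synthetic training data are illustrative
# assumptions, not a prescribed feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Historical changes: [lines_changed, files_touched, author_failure_rate]
X = rng.uniform([1, 1, 0.0], [2000, 50, 1.0], size=(500, 3))
# Label past changes that broke the build (synthetic rule for the demo)
y = ((X[:, 0] > 800) & (X[:, 2] > 0.4)).astype(int)

model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X, y)

# Score a new change before it is pushed to version control
candidate = np.array([[1200, 12, 0.55]])
risk = model.predict_proba(candidate)[0, 1]
print(f"Estimated build-break risk: {risk:.2f}")
if risk > 0.7:
    print("Flag for extra review before merge.")
```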

Efficient feedback loop for continuous automation
The first step in the DevOps journey is to recognise the problem and decide that automation is the solution. Organisations must first be able to detect issues and misconfigurations in their environment. This is critical to achieving velocity: before you can know whether you are working efficiently, you need to know your target.

The next step is to correct any issues discovered. This covers everything from defining a 'base' configuration for the organisation's infrastructure, to deploying updates to application code, to patching and upgrading systems, to remediating vulnerabilities. Finally, organisations need a way to automate both of these processes so that issues do not recur. A fast feedback loop can re-run the detection after each change to confirm that the change was successfully implemented.

Codifying workflows is vital. FAANG-like organisations have matured in utilising the "as Code" paradigm, and we now have Infrastructure as Code, Compliance as Code, Policy as Code and Workflow as Code. Tools and integrations have been around for a few years, helping even smaller organisations adopt such templates to create codified detect-correct-automate workflows. The efficiency of such a workflow depends on the feedback loop, and it is not just the speed but also the accuracy of that loop that determines efficiency. This is another area where human intervention is required, and it tends to become the bottleneck in process improvement. The application of AI is a natural fit for improving these feedback loops.
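
To make the idea concrete, here is a minimal sketch of such a codified detect-correct-automate loop. The Check structure, the reconcile() function and the toy in-memory 'configuration' are hypothetical stand-ins for real checks (package versions, open ports, baseline settings) and real remediation steps.

```python
# A minimal sketch of a codified detect-correct-verify loop. The check
# functions and the in-memory 'state' dict are hypothetical stand-ins
# for real configuration checks and remediation actions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    detect: Callable[[], bool]    # True when the system matches the baseline
    correct: Callable[[], None]   # remediation action

def reconcile(checks: list[Check], max_passes: int = 3) -> bool:
    """Run detection, apply corrections, then re-run detection to
    confirm each change took effect (the fast feedback loop)."""
    for _ in range(max_passes):
        drifted = [c for c in checks if not c.detect()]
        if not drifted:
            return True
        for check in drifted:
            print(f"Drift detected: {check.name}; applying correction")
            check.correct()
    return all(c.detect() for c in checks)

# Toy usage: a 'configuration' held in memory
state = {"ntp": False, "firewall": True}
checks = [
    Check("ntp enabled", lambda: state["ntp"], lambda: state.update(ntp=True)),
    Check("firewall on", lambda: state["firewall"], lambda: state.update(firewall=True)),
]
print("Baseline restored:", reconcile(checks))
```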

Organisations have started correlating data from different parts of the process, such as CI/CD systems, alerting and monitoring tools, and support ticketing systems. Through such correlations, built using AI technologies, you can generate feedback to promote or roll back a newly released software version.
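
As a hedged illustration, the sketch below blends a few post-release signals into a single promote or roll-back decision. The signal names, weights and threshold are assumptions for the example; in practice they would be drawn from the CI/CD, monitoring and ticketing systems and tuned, or learned, from historical releases.

```python
# A minimal sketch of a promote/roll-back decision fed by correlated
# signals. The signal names, weights and threshold are illustrative
# assumptions, not a standard scheme.
from dataclasses import dataclass

@dataclass
class ReleaseSignals:
    error_rate_delta: float      # change in error rate vs. previous version
    p95_latency_delta_ms: float  # change in 95th-percentile latency
    new_support_tickets: int     # tickets opened since rollout began

def release_score(s: ReleaseSignals) -> float:
    """Higher is worse; a weighted blend of post-release signals."""
    return (40.0 * max(s.error_rate_delta, 0.0)
            + 0.05 * max(s.p95_latency_delta_ms, 0.0)
            + 1.5 * s.new_support_tickets)

def decide(s: ReleaseSignals, threshold: float = 10.0) -> str:
    return "roll back" if release_score(s) > threshold else "promote"

print(decide(ReleaseSignals(error_rate_delta=0.002, p95_latency_delta_ms=12, new_support_tickets=1)))
print(decide(ReleaseSignals(error_rate_delta=0.3, p95_latency_delta_ms=250, new_support_tickets=9)))
```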

The availability and affordability of scanners make it easy to detect vulnerabilities within software code. However, narrowing down which vulnerabilities matter is not a trivial task, and that is another evolving application of AI. Combining static and dynamic analysis results, broader usage data, and past security exploitation data can identify the real vulnerabilities and their priorities.
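
A minimal sketch of that kind of prioritisation is shown below. The Finding fields, the weights and the sample identifiers are purely illustrative assumptions, not a standard scoring scheme such as CVSS.

```python
# A minimal sketch of vulnerability prioritisation that combines scanner
# output with exposure and exploitation signals. Fields and weights are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    static_hit: bool     # flagged by static analysis
    dynamic_hit: bool    # confirmed reachable by dynamic analysis
    exposed_users: int   # how many users touch the affected component
    known_exploit: bool  # exploitation observed in the wild

def priority(f: Finding) -> float:
    score = 1.0 * f.static_hit + 3.0 * f.dynamic_hit + 5.0 * f.known_exploit
    return score * (1 + f.exposed_users / 10_000)

findings = [
    Finding("CVE-A", True, False, 200, False),
    Finding("CVE-B", True, True, 8_000, True),
    Finding("CVE-C", False, True, 50, False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id}: priority {priority(f):.2f}")
```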

Actionable insights for cost savings
Public cloud infrastructure gives organisations the ability to scale fast when needed, which is why even large, data- and security-sensitive enterprises adopt a cloud strategy; security and cost remain the primary concerns. The Flexera report shows that 92 per cent of enterprises have a hybrid cloud strategy, exposing them to a multi-cloud environment, and they also invest in tools for multi-cloud cost management. Adopting DevOps practices has provided visibility into infrastructure and its utilisation, giving organisations the ability to plan for capacity and drive cost optimisations.

With information about reliability, the time required to scale, the frequency of updates and so on, I&O leaders can decide on the best infrastructure for deploying a particular workload: should it be a private cloud, a public cloud or a hybrid environment? Continuously training AI models with live data helps these models provide highly accurate predictions for infrastructure capacity planning across private and public cloud environments. Even in single public cloud environments, AI models can drive cost optimisation on an ongoing basis, which is particularly effective when traffic can be estimated accurately.
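
For illustration, the sketch below fits a simple linear trend to recent traffic and sizes an instance pool with headroom. The synthetic traffic series, the 20 per cent headroom and the per-instance capacity figure are assumptions; a production model would also capture seasonality and be retrained continuously on live data.

```python
# A minimal sketch of capacity forecasting from recent traffic, using a
# plain linear trend fitted with scikit-learn. Traffic data, headroom
# and per-instance capacity are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hourly request rate over the last two weeks (synthetic: growth + noise)
hours = np.arange(24 * 14).reshape(-1, 1)
requests = 1000 + 2.5 * hours.ravel() + rng.normal(0, 50, hours.size)

model = LinearRegression().fit(hours, requests)

# Forecast next week's peak hour and size the instance pool with headroom
future = np.arange(24 * 14, 24 * 21).reshape(-1, 1)
peak = model.predict(future).max()
per_instance_capacity = 500  # requests/hour each instance handles (assumed)
instances = int(np.ceil(1.2 * peak / per_instance_capacity))
print(f"Forecast peak: {peak:.0f} req/h -> provision {instances} instances")
```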

What to expect next
The application of AI tools is a natural complement to the work of DevOps practitioners. Building AI/ML expertise within an organisation needs a cautious approach, and poor data quality and limited data availability make it even more difficult. There is an opportunity for organisations with expertise in AI technologies to build easy-to-use, easily integratable tools that are tailor-made for specific use cases. We will see more such AI-enabled tools built for the DevOps ecosystem, enabling a continued increase in efficiencies.