DevOps is a set of practices that combines software development, testing, and IT operations. It aims at continuous, high-quality delivery. DevOps is a culture in which Development and IT Operations work together to shorten the development life cycle, supported by tools and processes.
The Journey of DevOps
Planning
Evaluate the current state
Identify waste-elimination opportunities
Identify failure modes
Capture how the process flows (or doesn't) end to end, across process, technology, data, and people
DevOps removes the traditional barriers between Development and IT Operations. Under the DevOps model, the development and IT operations teams work together across the entire software life cycle, from development through deployment to ongoing operations.
What are the benefits of DevOps?
There are many advantages to implementing DevOps; the ones with the biggest impact are listed here.
Improved Customer Experience
The goal of DevOps is to deliver high-quality software to the end user on time. DevOps is a culture change in which different teams work together to achieve that goal, which in turn increases revenue for the organization.
Collaboration
DevOps promotes an environment where different teams work together to achieve common organizational objectives. It facilitates collaboration by breaking down the traditional silos among Dev, Ops, and QA teams and encourages them to work toward a single goal: creating more value for your organization, which ultimately helps you deliver more value to your customers.
Speed
DevOps increases the pace at which you run your business. It speeds up the rate at which you deliver software, updates, features, and modifications through automated testing and integration. DevOps keeps your developers watching the product throughout its entire life cycle for updates and bugs, which decreases the time to monitor, locate, and fix them, and accelerates your time to market. You can also use value stream mapping in DevOps: it helps you identify production bottlenecks and non-value-adding processes so you can work toward fixing them and, as a result, create value faster.
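As a rough sketch of the value-stream-mapping idea, you can time each stage of your delivery process and flag the steps that add wait time rather than value. The stage names and durations below are invented for illustration, not measurements from a real pipeline:

```python
# Illustrative value-stream data: stage names and durations are
# made-up examples, not measurements from a real pipeline.
STAGES = [
    ("code review", {"duration_h": 2, "adds_value": True}),
    ("wait for QA", {"duration_h": 20, "adds_value": False}),
    ("manual testing", {"duration_h": 6, "adds_value": True}),
    ("wait for release window", {"duration_h": 44, "adds_value": False}),
    ("deploy", {"duration_h": 1, "adds_value": True}),
]

def analyze(stages):
    """Return total lead time and the non-value-adding (waiting) time."""
    total = sum(s["duration_h"] for _, s in stages)
    waste = sum(s["duration_h"] for _, s in stages if not s["adds_value"])
    return total, waste

total, waste = analyze(STAGES)
print(f"lead time: {total}h, waste: {waste}h")
```

The stages with `adds_value: False` are the bottlenecks a value stream map would tell you to attack first.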
Digital Transformation
Every enterprise, in every industry, is having to digitally transform the way it operates. This means using innovations in technology (e.g., mobile, IoT, connected cars) to deliver new digital services that enhance customer experience and improve employee productivity. At the centre of these digital services is software. DevOps is essential to delivering digital services at speed and with quality, so the bottom-line advantage of DevOps is that it is a foundational element of successful digital transformation.
Security
A good DevOps strategy strengthens the whole system and environment.
DevSecOps extends the DevOps core components of development and operations by introducing security as a first-class concern. Security is no longer just the security team's job: under DevSecOps, everyone is responsible for it. Catching security issues in the early phases of development reduces cost, saving money and time and smoothing the product release.
Cost Reduction
DevOps strategy’s biggest benefit from a business perspective is maximizing profitability. Interestingly, there are multiple ways through which DevOps cuts down the costs incurred in a business directly or indirectly.
Network Downtime
How does DevOps reduce it? The most common causes of downtime are poor service visibility and overloaded infrastructure.
DevOps, with its automated testing and continuous integration (CI) and continuous delivery (CD) practices, helps developers produce more efficient code and identify and fix bugs quickly. Application performance monitoring (APM) tools help trace problems back to their source. Together, these practices reduce network downtime and save money.
The continuous delivery pipeline starts when the developer commits the code for the microservice, its configuration files (Ansible playbooks, Chef cookbooks, or shell scripts), or infrastructure as code such as CloudFormation, ARM, Google Cloud templates, or Terraform.
Based on organization policy, a merge to the build branch triggers the build.
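That trigger policy can be sketched as a simple check on the webhook payload the source-control system sends. The field names below are illustrative assumptions, not any specific Git provider's API:

```python
# Hypothetical webhook payload; the field names are illustrative only.
def should_trigger_build(payload, build_branch="main"):
    """Trigger a build only for merged changes landing on the build branch."""
    return payload.get("merged", False) and payload.get("target_branch") == build_branch

print(should_trigger_build({"merged": True, "target_branch": "main"}))     # build branch: trigger
print(should_trigger_build({"merged": True, "target_branch": "feature"}))  # other branch: skip
```

A real CI server implements exactly this kind of gate, just with the provider's actual payload schema.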
Build Management
The pipeline defines the entire lifecycle of an application as code. This can be achieved in many ways, including a Jenkins or Spinnaker pipeline. Spinnaker is cloud agnostic and can target any cloud platform, with pipelines defined declaratively as code.
All the stages for an application are written in a Jenkinsfile and executed automatically. There can be any number of Jenkins masters and a pool of executors or agents to manage them efficiently. CloudBees Jenkins Operations Center (JOC) Enterprise manages shared agents quite efficiently. Another way to scale Jenkins is with DC/OS and Marathon, which allow multiple Jenkins masters to share a single pool of resources to run builds; Jenkins agents are then created and destroyed dynamically in direct response to demand.
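The demand-driven agent scaling described above boils down to a sizing rule. The builds-per-agent ratio and pool bounds below are illustrative assumptions, not Jenkins or DC/OS defaults:

```python
import math

def agents_needed(queued_builds, builds_per_agent=2, min_agents=1, max_agents=20):
    """Size the agent pool from the build queue, clamped to pool limits.

    The ratio and limits are illustrative; a real autoscaler would tune them.
    """
    needed = math.ceil(queued_builds / builds_per_agent)
    return max(min_agents, min(max_agents, needed))
```

An orchestrator then creates or destroys agents until the running count matches this target.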
Quality Management
SonarQube can analyze source code; the outcome of this analysis includes quality measures and issues (instances where coding rules were broken). The exact measures vary depending on the language being analyzed.
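SonarQube's quality-gate idea, passing or failing a build against thresholds on those measures, can be sketched as below. The metric names and thresholds are illustrative, not the SonarQube API:

```python
# Illustrative thresholds; real quality gates are configured in SonarQube.
THRESHOLDS = {"coverage_min": 80.0, "blocker_issues_max": 0}

def quality_gate(measures, thresholds=THRESHOLDS):
    """Return (passed, reasons) for a set of analysis measures."""
    failures = []
    if measures["coverage"] < thresholds["coverage_min"]:
        failures.append("coverage below minimum")
    if measures["blocker_issues"] > thresholds["blocker_issues_max"]:
        failures.append("blocker issues present")
    return (len(failures) == 0, failures)

passed, reasons = quality_gate({"coverage": 75.0, "blocker_issues": 2})
```

In a pipeline, a failed gate stops the artifact from being promoted any further.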
Repository Management
Artifacts built by Jenkins are pushed to the repository manager and can be tagged based on environments.
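One common tagging convention, used here purely as an illustration rather than a repository-manager feature, is to encode the target environment into the tag:

```python
def artifact_tag(name, version, environment):
    """Build an environment-qualified tag, e.g. myapp:1.4.2-dev.

    The environment list is an illustrative assumption.
    """
    allowed = {"dev", "qa", "staging", "prod"}
    if environment not in allowed:
        raise ValueError(f"unknown environment: {environment}")
    return f"{name}:{version}-{environment}"
```

The same artifact keeps its version through promotion; only the environment suffix changes.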
Docker Registry
The Docker daemon running on the CI server builds an image from the Dockerfile in the source code and pushes it to the Docker registry. This can be Docker Hub, AWS ECR, Google Container Registry, Azure Container Registry, or even a private registry.
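A sketch of that CI step, assembling the actual docker CLI invocations without executing them (the image and registry names are placeholders):

```python
def docker_push_commands(image, tag, registry):
    """Return the docker CLI commands a CI job would run to build and push.

    Registry and image names are placeholders for illustration.
    """
    local = f"{image}:{tag}"
    remote = f"{registry}/{local}"
    return [
        ["docker", "build", "-t", local, "."],  # build from the Dockerfile
        ["docker", "tag", local, remote],       # retag for the target registry
        ["docker", "push", remote],             # push to ECR/GCR/ACR/private registry
    ]
```

A CI job could execute each command with `subprocess.run(cmd, check=True)` so any failure stops the pipeline.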
Deployment Management
Here, artifacts are going to pass all the stages starting from dev to prod. We have to ensure that it passes each stage gate as per organization standards and is promoted to a higher environment using the proper tag(s).
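The promotion flow above can be sketched as a simple state machine. The environment order and gate check are illustrative assumptions standing in for an organization's real standards:

```python
ENVIRONMENTS = ["dev", "qa", "staging", "prod"]  # illustrative promotion order

def promote(current_env, gate_passed):
    """Move the artifact one environment forward only if its stage gate passed."""
    if not gate_passed:
        raise RuntimeError(f"gate failed in {current_env}; promotion blocked")
    i = ENVIRONMENTS.index(current_env)
    if i == len(ENVIRONMENTS) - 1:
        return current_env  # already in prod; nothing further to promote to
    return ENVIRONMENTS[i + 1]
```

In practice the gate check would aggregate test results, quality measures, and approvals before allowing the move.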
Build Infrastructure in the Cloud
If it is a single cloud provider, we can use that provider's templates: for AWS, CloudFormation templates; for Azure, Azure Resource Manager; and for Google, Google Cloud Deployment Manager templates. The build agent has the provider's CLI installed, which helps us trigger provisioning automatically and create the infrastructure for the target environment.
Terraform is cloud-agnostic and allows a single configuration to be used to manage multiple providers. It can even handle cross-cloud dependencies. This simplifies the management and orchestration of the infrastructure, helping operators build large-scale multi-cloud infrastructures.
Container Configuration
It is advisable to use the same container image for all environments. There are multiple ways of configuring it. I have listed a few of them below:
Set the application configuration dynamically via environment variables.
Map the config files via Docker volumes.
Bake the configuration into the container.
If configuration is provided as a service, fetch it from a config server.
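The first option, one image configured per environment through variables, can be sketched with standard-library calls. The variable names and defaults are illustrative:

```python
import os

def load_config():
    """Read settings from the environment so one image runs everywhere.

    Variable names and defaults are illustrative examples.
    """
    return {
        "db_url": os.environ.get("APP_DB_URL", "sqlite:///local.db"),  # dev default
        "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
    }

# In Docker this would be set with: docker run -e APP_LOG_LEVEL=DEBUG ...
os.environ["APP_LOG_LEVEL"] = "DEBUG"
config = load_config()
```

Because the image never changes between environments, what you tested in dev is byte-for-byte what runs in prod.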
Test Automation
A fast User Acceptance Test [UAT] feedback cycle is critical for continuous delivery to be successful. Acceptance Test-Driven Development [ATDD] is a necessity to establish a speedy feedback loop. With ecosystems like Docker and cloud infrastructure, automated tests that require compute, storage, and network environments become easier to provision and run. For ATDD, we can use Mockito, Cucumber, or Selenium Grid.
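An acceptance-style test in that spirit looks like the sketch below. The shopping-cart behaviour under test is an invented example; tools like Cucumber or Selenium Grid would drive a real service or UI the same given/when/then way:

```python
class Cart:
    """Toy system under test, standing in for a real service."""
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_total_reflects_added_items():
    # Given an empty cart, when two items are added,
    # then the total is their sum.
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    assert cart.total() == 14.0

test_total_reflects_added_items()
```

Running such tests on every commit is what turns the UAT cycle from days into minutes.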
Container Cluster Management
Using the microservices architecture, you can easily deploy containers and run your application. These containers are lightweight in comparison to Virtual Machines [VMs] and use the underlying infrastructure more efficiently. They can be scaled up or down depending on the demand. In addition, they make it easier to move applications between different environments.
Orchestration tools should have the following capabilities:
Provisioning
Monitoring
Service Discovery
Rolling Upgrades and Rollback
Configuration-as-text
Policies for Placement, Scalability, etc.
Administration
A few of the orchestration tools we can use are Kubernetes, Docker Swarm, Mesos+Marathon, Mesosphere DCOS, and Amazon EC2 Container Service.
Log Management
There is a plethora of log management tools available in the market, and Docker has introduced plug-ins for them. These can be installed as binaries.
Listed below are the various drivers for log management:
Fluentd — supports TCP or Unix socket connections to fluentd
journald — stores container logs in the system journal
Splunk — HTTP/HTTPS forwarding to a Splunk server
Syslog — supports UDP, TCP, and TLS
GELF — UDP log forwarding to Graylog2
For a complete log management solution, additional tools need to be involved:
Log parser to structure logs, typically part of log shippers
Log indexing, visualisation and alerting
Elasticsearch and Kibana
Graylog OSS / Enterprise
Splunk
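The log-parser role listed above can be sketched with a regular expression that turns a flat container log line into a structured record ready for indexing. The line format here is an illustrative example, not a specific driver's output:

```python
import re

# Illustrative flat log line:
# "2024-05-01T10:22:03Z web-1 ERROR timeout calling payments"
LINE_RE = re.compile(
    r"(?P<ts>\S+)\s+(?P<container>\S+)\s+(?P<level>[A-Z]+)\s+(?P<message>.*)"
)

def parse_line(line):
    """Structure one log line; return None for lines that don't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

record = parse_line("2024-05-01T10:22:03Z web-1 ERROR timeout calling payments")
```

Shippers like Fluentd apply exactly this kind of parsing before handing records to Elasticsearch or Graylog for indexing and alerting.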
Monitoring Management
The agent collects metrics and events from our systems and apps. You can install an agent in each container (or at least one per host), and it will then generate the metrics and publish them.
The user can see the number of containers over time, plus information across instances such as CPU usage, operating system usage, and container usage.
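The agent's job, sampling per-container metrics and rolling them up across instances, can be sketched as below. The sample values are invented, standing in for what an agent would actually collect:

```python
# Invented samples: CPU usage (%) reported per container over time.
samples = {
    "web-1": [12.0, 30.0, 24.0],
    "web-2": [40.0, 44.0, 48.0],
}

def rollup(samples):
    """Aggregate per-container samples into the averages a dashboard shows."""
    return {name: sum(vals) / len(vals) for name, vals in samples.items()}

averages = rollup(samples)
container_count = len(samples)
```

A monitoring backend does the same aggregation continuously, which is what lets you watch container counts and CPU usage over time.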
Tools to Focus on for DevOps
We can make use of IT infrastructure tools to gain the benefits of DevOps; a few are mentioned below.