Implementing Observability using AWS Distro for OpenTelemetry (ADOT) will present:
– Introduction to Observability and Monitoring
– Introduction to OpenTelemetry and ADOT (AWS Distro for OpenTelemetry)
– Demo sample with a Golang RESTful API
In conclusion, by implementing AWS Distro for OpenTelemetry (ADOT), you have taken a crucial step towards ensuring the performance, reliability, and security of your RESTful API application.
With its powerful features and seamless integration with OpenTelemetry, ADOT provides real-time visibility into your API, helping you quickly identify and resolve performance issues and keep your API running smoothly.
All presentation material will also be published as a series on my personal blog: https://devopscorner.id. Stay tuned and keep in touch!
Docker is an operating system for containers. Similar to the way a virtual machine virtualizes (removes the need to directly manage) server hardware, containers virtualize a server's operating system. Docker provides simple commands that you can use to create, start, or stop containers.
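As a minimal sketch of those simple commands in practice (the image name and the Go application are hypothetical, chosen because the sessions above demo a Golang API), a Dockerfile might look like this:

```dockerfile
# Dockerfile -- hypothetical minimal image for a Go application
FROM golang:1.19-alpine AS build
WORKDIR /app
COPY . .
# Build a static binary for the API
RUN go build -o /bin/api .

# Small runtime image that only carries the binary
FROM alpine:3.16
COPY --from=build /bin/api /bin/api
ENTRYPOINT ["/bin/api"]
```

Typical lifecycle commands are then `docker build -t my-api .` to create the image, `docker run -d --name my-api my-api` to start a container, and `docker stop my-api` to stop it.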
Curious about how to use Docker for your application?
No worries: in this session, Dwi Fahni Denni, AWS Community Builder and Infrastructure & Cloud Services Manager at Xapiens, and Muhammad Syukur Abadi, Student & Developer at Ngalam Backend, will introduce Docker from the very beginning.
Amazon Web Services (AWS) is a secure cloud services provider that is widely used around the world, including by startups.
AWS offers more than 200 fully featured services from data centers globally. Curious about how to adopt AWS in your company, and what benefits a company gains from using AWS services?
Find the answers by joining this session, "Introduction to AWS Services", with Dwi Fani Denni, AWS Community Builder / Infrastructure & Cloud Services Manager at Xapiens.
Mark your calendar!
Day/Date: Friday, 9 September 2022
Time: 19.00 WIB
As an integral part of the DevOps culture, Cost Monitoring & Optimization is the most important element in monitoring and optimizing the use of infrastructure, especially in today’s cloud computing era. In this event, we will discuss the strategy of cost monitoring & optimization of infrastructure in using Kubernetes (EKS) on AWS.
In this session, we will discuss provisioning estimation costs, autoscaling systems, downscale schedules, and alerting systems for cost usage notifications from cost limitation budgets.
Don’t miss ZX Talk – Infrastructure Kubernetes (EKS) Cost Monitoring & Optimization which will be held on:
Date: Thursday, 23 June 2022
Time: 14.00 – 15.30 (2 – 3.30 pm) Jakarta time
Place: Virtual Meet
The AWS Community Builders program offers technical resources, mentorship, and networking opportunities to AWS technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. This directory contains all Community Builders who have chosen to be listed publicly.
In this article, I’d like to share about GitOps (Git Operations) and AWS Developer Tools to provide GitOps workflow.
I will split our discussion into three articles. This article focuses on GitOps and AWS Developer Tools; the next two will cover the IaC (Infrastructure-as-Code) tooling using Terraform, and the implementation of those IaC tools for provisioning infrastructure and their integration with AWS Developer Tools.
So, let’s go…
As an introduction, I will cover:
Introduction of GitOps
GitOps Workflows in DevSecOps
Introduction of GitOps
Let's start with the term itself: there are countless definitions of GitOps (Git Operations) around the internet. In my opinion, GitOps is a way for a team to collaborate on the operational side of DevOps when releasing applications, including version control, CI/CD, and provisioning on modern cloud infrastructure.
Why do we need GitOps?
Nowadays, thousands of applications are delivered on mobile platforms, and there is also considerable complexity running behind web applications. These applications, whether web-only or mobile, are built with more than one programming language and more than one layer of infrastructure architecture.
The more complex the application, the more complex its development and deployment process becomes. In the GitOps workflow, team collaboration through a sequence of processes is part of the DevOps culture: version control of the source code, pull requests (PRs), pair code review, the approval process, and delivering the source code through to testing in the deployment environment.
This collaboration also helps the team reproduce the steps and mitigate errors that may arise in the future.
When we look at the ideal process of delivering a service from source code to provisioning the environment, there are a number of stages and processes involved.
In Diagram-1, for example, you can see how the GitOps pipeline works for deploying a container application. The developers carry out development inside an isolated environment using AWS Cloud9 as an IDE, and they use AWS Developer Tools to build, test, and deploy the services in the GitOps flow.
At the beginning, source code is pushed to AWS CodeCommit to open a PR (pull request); this also triggers AWS CodeBuild to run code inspection for code quality and code security checks. If the PR is approved, the pipeline builds the container image, which is registered in Amazon ECR as the container registry.
The built image is identified by a tag or commit hash, which AWS CodeBuild also pulls as a reference and delivers to the staging environment. For the production environment, approval is required before the deployment process.
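As a hedged sketch of the build stage described above (the repository URI, region variable, and tag-derivation step are my own assumptions, not taken from the original), a CodeBuild buildspec might tag the image with the commit hash and push it to Amazon ECR like this:

```yaml
# buildspec.yml -- illustrative sketch; $ECR_REPO_URI and $AWS_REGION are placeholders
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against the Amazon ECR registry
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
      # Use the short commit hash of the source revision as the image tag
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c1-7)
  build:
    commands:
      # Build the container image and tag it with the commit hash
      - docker build -t $ECR_REPO_URI:$IMAGE_TAG .
  post_build:
    commands:
      # Register the image in ECR so later stages can pull it by tag
      - docker push $ECR_REPO_URI:$IMAGE_TAG
```

Tagging with the commit hash is what lets the downstream deploy stages pull exactly the revision that was reviewed and approved.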
GitOps Workflows in DevSecOps
If we integrate the security pipeline (as shown in Diagram-2), it will have 2 additional stages – SAST & DAST. The Static Application Security Testing (SAST) including Security Configuration Assessment (SCA) will run before the deployment of the staging environment. The Dynamic Application Security Testing (DAST) will inspect and test the staging environment deployment before it is moved to the production environment.
This DAST stage will use a standard application from OWASP (the Open Web Application Security Project).
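As one hedged example of how that DAST stage could be wired in (the article does not name a specific tool; OWASP ZAP is a common OWASP project for this, and $STAGING_URL is a placeholder), a CodeBuild stage might run a ZAP baseline scan against the staging deployment:

```yaml
# buildspec-dast.yml -- illustrative; the tool choice and $STAGING_URL are assumptions
version: 0.2
phases:
  build:
    commands:
      # Run the OWASP ZAP baseline (passive) scan against the staging environment
      - docker run --rm owasp/zap2docker-stable zap-baseline.py -t $STAGING_URL
```

A non-zero exit code from the scan would fail the build, blocking promotion to production until the findings are addressed.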
How to Choose The Best Pipeline?
Again, my answer is: it depends on how complex your application layers are and what kind of infrastructure you plan to deploy to.
So, I hope this article has given you some knowledge about GitOps and its workflows. We will see you again in the next article, on implementing the GitOps flow with IaC tools.