Kubernetes is a highly popular container orchestration platform designed to manage distributed applications at scale. With many advanced capabilities for deploying, scaling, and managing containers, it allows software engineers to build highly flexible and resilient infrastructure. It is also open-source software that provides a declarative approach to application deployment, enabling seamless scaling and load balancing across multiple nodes. With built-in fault tolerance and self-healing capabilities, Kubernetes ensures high availability and resiliency for your applications. One of the key advantages of Kubernetes is its ability to automate many operational tasks, abstracting the underlying complexities of the infrastructure so that developers can focus on application logic and on optimizing the performance of their solutions.

What Is ChatGPT?

You've probably heard a lot about ChatGPT: it is a renowned language model that has revolutionized the field of natural language processing (NLP). Built by OpenAI, ChatGPT is powered by advanced artificial intelligence algorithms and trained on massive amounts of text data. ChatGPT's versatility goes beyond virtual assistants and chatbots, as it can be applied to a wide range of natural language processing applications. Its ability to understand and generate human-like text makes it a valuable tool for automating tasks that involve understanding and processing written language. The underlying technology behind ChatGPT is based on deep learning and transformer models. The ChatGPT training process involves exposing the model to large amounts of text data from a variety of sources. This extensive training helps it learn the intricacies of language, including grammar, semantics, and common patterns. Furthermore, the ability to fine-tune the model with specific data means it can be tailored to perform well in particular domains or specialized tasks.

Integrating ChatGPT (OpenAI) With Kubernetes: Overview

Integrating Kubernetes with ChatGPT makes it possible to automate tasks related to the operation and management of applications deployed in Kubernetes clusters. Leveraging ChatGPT allows you to interact with Kubernetes using text or voice commands, which in turn enables the execution of complex operations with greater efficiency. With this integration, you can streamline tasks such as:

Deploying applications
Scaling resources
Monitoring cluster health

The integration empowers you to take advantage of ChatGPT's contextual language generation capabilities to communicate with Kubernetes in a natural and intuitive manner. Whether you are a developer, system administrator, or DevOps professional, this integration can revolutionize your operations and streamline your workflow. The outcome is more room to focus on higher-level strategic initiatives and improved overall productivity.

Benefits of Integrating ChatGPT (OpenAI) With Kubernetes

Automation: This integration simplifies and automates operational processes, reducing the need for manual intervention.
Efficiency: Operations can be performed quickly and with greater accuracy, optimizing time and resources.
Scalability: Kubernetes provides automatic scaling capabilities, allowing applications managed through ChatGPT to expand without additional effort.
Monitoring: ChatGPT can provide real-time information about the state of Kubernetes clusters and applications, facilitating issue detection and resolution.
How To Integrate ChatGPT (OpenAI) With Kubernetes: A Step-By-Step Guide

At this point, we assume you already have a suitable environment for the integration, including a Kubernetes installation and an OpenAI account for the ChatGPT calls. Let's proceed to configure the credentials ChatGPT needs to access Kubernetes, using the `kubernetes-client` library in the automation script for interactions with Kubernetes.

First, create your token on the OpenAI platform. We will forward status messages to Slack, and in case of problems in Kubernetes, ChatGPT will propose possible solutions to apply.

Now let's configure the agent script. Remember to fill in these placeholders:

Bearer <your token>
slack_client = WebClient(token="<your token>")
channel_id = "<your channel id>"

Python
import time

import requests
from slack_sdk import WebClient
from kubernetes import client, config

# Function to interact with the GPT model
def interact_chatgpt(message):
    endpoint = "https://api.openai.com/v1/chat/completions"
    prompt = "User: " + message
    response = requests.post(
        endpoint,
        headers={
            "Authorization": "Bearer <your token>",
            "Content-Type": "application/json",
        },
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "system", "content": prompt}],
        },
    )
    response_data = response.json()
    chatgpt_response = response_data["choices"][0]["message"]["content"]
    return chatgpt_response

# Function to send a notification to Slack
def send_notification_slack(message):
    slack_client = WebClient(token="<your token>")
    channel_id = "<your channel id>"
    response = slack_client.chat_postMessage(channel=channel_id, text=message)
    return response

# Kubernetes configuration
config.load_kube_config()
v1 = client.CoreV1Api()

# Collect Kubernetes cluster metrics, logs, and events
def get_information_cluster():
    # Logic for collecting Kubernetes cluster metrics
    metrics = v1.list_node()
    # Logic for collecting Kubernetes cluster logs
    logs = v1.read_namespaced_pod_log("POD_NAME", "NAMESPACE")
    # Logic for collecting Kubernetes cluster events
    events = v1.list_event_for_all_namespaces()
    return metrics, logs, events

# Troubleshoot based on the collected information
def identify_problems(metrics, logs, events):
    problems = []
    # Logic to analyze metrics and identify issues
    for metric in metrics.items:
        if metric.status.conditions is None or metric.status.conditions[-1].type != "Ready":
            problems.append(f"The node {metric.metadata.name} is not ready.")
    # Logic to analyze the logs and identify problems
    if "ERROR" in logs:
        problems.append("Errors were found in pod logs.")
    # Logic to analyze events and identify problems
    for event in events.items:
        if event.type == "Warning":
            problems.append(f"A warning event has been logged: {event.message}")
    return problems

# Kubernetes cluster monitoring loop
def monitoring_cluster_kubernetes():
    while True:
        metrics, logs, events = get_information_cluster()
        problems = identify_problems(metrics, logs, events)
        for problem in problems:
            print(f"Identified problem: {problem}")
            # Generate a troubleshooting recommendation with ChatGPT
            chatgpt_response = interact_chatgpt(problem)
            # Send a notification to Slack with the issue description and recommendation
            slack_message = f"Identified problem: {problem}\nRecommendation: {chatgpt_response}"
            send_notification_slack(slack_message)
        # Wait for 1 minute before performing the next check
        time.sleep(60)

# Run the ChatGPT agent and monitor the Kubernetes cluster
if __name__ == "__main__":
    monitoring_cluster_kubernetes()
Now use the Dockerfile example below to build your container with the ChatGPT agent. Remember that it is necessary to create a volume with your kube config:

Dockerfile
# Define the base image
FROM python:3.9-slim

# Copy the Python script to the working directory of the image
COPY agent-chatgpt.py /app/agent-chatgpt.py

# Define the working directory of the image
WORKDIR /app

# Install required dependencies
RUN pip install requests slack_sdk kubernetes

# Run the Python script when the container starts
CMD ["python", "agent-chatgpt.py"]

Congratulations! If everything is properly configured, at some point while the script is running you may get monitoring messages similar to this:

Best Practices for Using Kubernetes With ChatGPT (OpenAI)

Security: Implement appropriate security measures to protect ChatGPT's access to Kubernetes.

Logging and Monitoring: Implement robust logging and monitoring practices within your Kubernetes cluster. Use tools like Prometheus, Grafana, or Elasticsearch to collect and analyze logs and metrics from both the Kubernetes cluster and the ChatGPT agent. This will provide valuable insights into the performance, health, and usage patterns of your integrated system.

Error Handling and Alerting: Establish a comprehensive error handling and alerting system to promptly identify and respond to any issues or failures in the integration. Set up alerts and notifications for critical events, such as failures in communication with the Kubernetes API or unexpected errors in the ChatGPT agent. This will help you proactively address problems and ensure smooth operation.

Scalability and Load Balancing: Plan for scalability and load balancing within your integrated setup. Consider utilizing Kubernetes features like horizontal pod autoscaling and load balancing to efficiently handle varying workloads and user demands (see the sketch after this list). This will ensure optimal performance and responsiveness of your ChatGPT agent while maintaining the desired level of scalability.

Backup and Disaster Recovery: Implement backup and disaster recovery mechanisms to protect your integrated environment. Regularly back up critical data, configurations, and models used by the ChatGPT agent. Furthermore, create and test disaster recovery procedures to minimize downtime and data loss in the event of system failures or disasters.

Continuous Integration and Deployment: Implement a robust CI/CD (Continuous Integration/Continuous Deployment) pipeline to streamline the deployment and updates of your integrated system. Automate the build, testing, and deployment processes for both the Kubernetes infrastructure and the ChatGPT agent to ensure a reliable and efficient release cycle.

Documentation and Collaboration: Maintain detailed documentation of your integration setup, including configurations, deployment steps, and troubleshooting guides. Also, encourage collaboration and knowledge sharing among team members working on the integration. This will facilitate smoother onboarding and effective troubleshooting in the future.
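To make the scalability practice concrete, here is a minimal sketch of a HorizontalPodAutoscaler manifest. It assumes the agent built with the Dockerfile above runs as a hypothetical Deployment named chatgpt-agent; the names, namespace, and thresholds are illustrative, not part of the original setup:

YAML
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: chatgpt-agent          # hypothetical name for the agent's Deployment
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: chatgpt-agent
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU utilization passes 70%

Applied with kubectl apply -f hpa.yaml, this lets Kubernetes add or remove agent replicas as load changes, which is the kind of automatic scaling the best practices above refer to.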
By incorporating these additional recommendations into your integration approach, you can further enhance the reliability, scalability, and maintainability of your Kubernetes and ChatGPT integration.

Conclusion

Integrating Kubernetes with ChatGPT (OpenAI) offers numerous benefits for managing operations and applications within Kubernetes clusters. By adhering to the best practices and following the step-by-step guide provided in this resource, you will be well-equipped to leverage the capabilities of ChatGPT for automating tasks and optimizing your Kubernetes environment. The combination of Kubernetes' advanced container orchestration capabilities and ChatGPT's contextual language generation empowers you to streamline operations, enhance efficiency, enable scalability, and facilitate real-time monitoring. Whether it's automating deployments, scaling applications, or troubleshooting issues, the integration of Kubernetes and ChatGPT can significantly improve the management and performance of your Kubernetes infrastructure. As you embark on this integration journey, remember to prioritize security measures, ensure continuous monitoring, and consider customizing the ChatGPT model with Kubernetes-specific data for more precise results. Maintaining version control and keeping track of Kubernetes configurations will also prove invaluable for troubleshooting and future updates.
What Are Cloud-Native Applications?

Cloud-native applications mark a change in how software is created and rolled out, making use of the capabilities of cloud computing environments. These apps are structured as a set of services known as microservices, which interact through clear APIs. Containerization tools like Docker are commonly used to package each microservice along with its dependencies to ensure consistency across setups and enable portable deployment. Platforms like Kubernetes automate the management of these apps, handling tasks like scaling, load balancing, and service discovery. DevOps methods that stress collaboration between development and operations teams play a central role in the cloud-native approach by enabling continuous integration, continuous delivery, and swift iteration. With flexibility and scalability at their core, cloud-native applications can adapt resources dynamically to meet changing workloads for performance and cost effectiveness. Furthermore, they prioritize resilience, with fault tolerance measures in place to handle failures gracefully and maintain availability. Embracing cloud-native principles enables organizations to speed up innovation, boost agility, and streamline their software development processes.

The Runtime Security Model

The Runtime Security Model refers to the security measures and protocols applied while an application is actively running. It involves a range of strategies and technologies aimed at safeguarding the application and its infrastructure from security risks during operation. The key elements of the Runtime Security Model are:

Access Controls: Enforcing access controls in real time ensures that only authorized users or processes can interact with the application and its data. This includes setting up authentication mechanisms like multi-factor authentication (MFA) or OAuth to verify user identities and enforce proper authorization rules.

Encryption: Encrypting data as the application runs helps prevent unauthorized access or interception. This involves encrypting data in transit using protocols like HTTPS or TLS, as well as encrypting data at rest using strong encryption algorithms and secure storage methods.

Runtime Monitoring: Continuous monitoring of the application's runtime environment is crucial for detecting and responding to security threats or irregularities. This involves keeping track of activities, auditing events, and monitoring system and network traffic.

Vulnerability Management: Consistently assessing the app and its components is important to catch any weaknesses and maintain a secure posture. Automated vulnerability-scanning tools can help spot vulnerabilities and rank them by severity, making it easier to address them.

Container Security: When utilizing containerization technology to deploy the application, it is vital to focus on container security. This includes activities like scanning container images for vulnerabilities, monitoring container behavior during runtime, and implementing security measures at the container orchestration layer.

Secure Configuration Management: Secure configuration management of the application and its operating environment plays a key role in reducing the attack surface and minimizing security threats. This involves steps such as hardening operating systems, securing network settings, and deactivating services or features that could create vulnerabilities.

Runtime Threat Detection and Response: Having mechanisms in place for identifying and responding to threats in real time during operation is essential for handling security incidents. Techniques like behavioral analysis, machine learning algorithms, or threat intelligence feeds can aid in recognizing suspicious activities or potential breaches and enhance the overall security posture.
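As a concrete illustration of the container security and secure configuration elements above, here is a minimal sketch of a Kubernetes pod spec that restricts container privileges at runtime. The pod and image names are hypothetical, and the exact set of restrictions depends on your workload:

YAML
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # refuse to start as root
        allowPrivilegeEscalation: false     # block setuid-style escalation
        readOnlyRootFilesystem: true        # immutable root filesystem
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
        seccompProfile:
          type: RuntimeDefault              # apply the runtime's default seccomp profile

These settings implement, at the orchestration layer, the kind of runtime restrictions (seccomp profiles, reduced privileges) that the list above describes.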
Types of Cloud Native Environments

Cloud-native environments can be classified based on the technologies and deployment models they use.

Virtual Machines (VMs): In environments based on VMs, applications are deployed within virtual servers. Each VM operates with its own operating system, ensuring separation between applications. Hypervisors handle the distribution of resources (such as CPU, memory, and storage) to VMs. Cloud service providers offer various sizes and configurations of VM instances so users can deploy and scale applications as required.

Containers: Containers act as packages that contain an application and its necessary components, facilitating deployment across different settings. Cloud-native environments that rely on containers employ technologies like Docker to bundle applications into containers. These containers share the host operating system's kernel, resulting in lower overhead compared to virtual machines (VMs). Kubernetes serves as a platform for managing containerized applications at scale.

Container Services: Container services platforms offer a managed environment for deploying, orchestrating, and scaling applications without users needing to handle the complexities of the underlying infrastructure. These platforms simplify container orchestration tasks and allow developers to focus on building and deploying their applications effectively.

Serverless Functions: With serverless functions, developers can run functions or code segments without the need to manage servers or infrastructure. Cloud providers allocate resources dynamically to execute these functions based on events or triggers. Serverless functions are typically stateless, event-triggered, and short-lived, making them ideal for event-driven architectures, real-time data processing, and microservices applications. Some examples of serverless platforms are AWS Lambda, Google Cloud Functions, and Azure Functions.

Cloud Native Application Security Best Practices

Securing cloud-native applications involves a strategy that covers all levels of the application stack, ranging from the underlying infrastructure to the actual application code. Let's explore some guidelines for ensuring security in cloud-native applications:

Secure Development Practices: Use secure coding techniques and guidelines like the OWASP Top 10 to prevent security risks such as injection attacks, XSS, CSRF, and others. Incorporate code reviews, static code analysis, and automated security assessments (like SAST and DAST) during development to pinpoint and fix security weaknesses at an early stage.

Container Security: Scan container images frequently for vulnerabilities using tools such as Clair, Trivy, or Anchore. Make sure that container images originate from trusted sources, opt for minimal base images, and include essential dependencies exclusively. Implement security measures during runtime, like SELinux, AppArmor, or seccomp profiles, to restrict container privileges and minimize the risk of attacks.

Network Security: Utilize network segmentation and firewalls to control the movement of data between parts of the application.
Incorporate encryption methods like TLS/SSL to safeguard data in transit from eavesdropping and interception by third parties. Employ Web Application Firewalls (WAFs) to screen HTTP traffic for malicious content and security threats.

API Security: Authenticate API requests by utilizing API keys, OAuth tokens, or JWT tokens. Set up usage limits, control the flow of traffic, and enforce access rules to deter misuse and counteract DDoS attacks. Sanitize input data to ward off injection attacks and uphold data integrity.

Logging and Monitoring: Set up a system for logging and monitoring to keep tabs on security incidents and unusual events. Make use of SIEM (Security Information and Event Management) tools to gather and correlate security logs from multiple sources to detect threats and respond to incidents. Create alerts and automated actions for any suspicious activities or security breaches.

Incident Response and Disaster Recovery: Maintain an incident response plan that details steps for recognizing, containing, and recovering from security issues, and exercise it routinely. Confirm the effectiveness of backup and disaster recovery protocols to safeguard data integrity and reduce disruption in the event of an intrusion or breakdown.

Cloud Native Security Tools and Platforms

Various security tools and platforms are available to tackle the challenges of safeguarding cloud-native applications and environments. Below are some standout examples categorized by their functions:

1. Container Security

Docker Security Scanning

Docker Security Scanning is a feature offered by Docker Hub for stored Docker container images. It enables users to check Docker container images for security issues and receive alerts about any vulnerabilities found. Here's a breakdown of how Docker Security Scanning operates:

Uploading Images: When a user uploads a Docker image to Docker Hub, it is queued for security scanning.

Detecting Vulnerabilities: Docker Hub utilizes databases of known vulnerabilities to scan through the layers of the container image, looking for security flaws in operating system packages, libraries, and dependencies integrated into the image.

Security Alerts: After completing the scanning process, Docker Hub generates security alerts highlighting any vulnerabilities discovered in the image. These alerts detail information about each vulnerability, such as its severity level, affected components, and recommended steps for fixing it.

Clair

Clair is an open-source tool used to scan container images for vulnerabilities. It was created by CoreOS, which is now part of Red Hat. It is commonly utilized in container security processes to identify and address security flaws in Docker and OCI (Open Container Initiative) images. Let's delve into Clair and explore its functionality:

Detecting Vulnerabilities: Clair analyzes container images and their layers to detect known security vulnerabilities present in the operating system packages, libraries, and dependencies included in the image. It compares the components within the image with an updated database of known vulnerabilities obtained from security advisories.

Architecture Design: Clair is structured with an architecture that allows for scalable vulnerability scanning. It comprises components such as a database (commonly PostgreSQL), a REST API server, and worker processes responsible for fetching vulnerability data and carrying out scanning operations.
Analyzing Static Data: Clair analyzes container images without running them, enabling swift and lightweight vulnerability checks. It extracts metadata from image manifests and scrutinizes layers to gather details about installed packages, libraries, and their respective versions.

CVE Matching: Clair compares the elements in container images against the Common Vulnerabilities and Exposures (CVE) database to identify any vulnerabilities. It provides information on each vulnerability, such as its CVE ID, severity rating, and impacted versions, as well as references and advisories.

Integration With Container Orchestration Platforms: Clair can be connected with container orchestration platforms like Kubernetes to automate vulnerability scans during deployment. There are plugins and extensions for integration with popular container runtimes and orchestrators.

Customization and Extensibility: Clair is highly customizable and flexible, allowing users to personalize vulnerability scanning policies, set scanning thresholds, and link up with external systems and tools. Users can create custom plugins and extensions to expand Clair's capabilities and mesh them into existing security processes and toolsets.

Anchore Engine

The Anchore Engine is an open-source container security platform that focuses on analyzing, evaluating, and validating container images for security vulnerabilities, compliance with policies, and adherence to industry standards. It allows organizations to uphold security protocols and guarantee that containerized applications are built and launched securely. Here is an overview of the Anchore Engine along with its features:

Vulnerability Assessment: The Anchore Engine conducts vulnerability assessments on container images, pinpointing established security vulnerabilities in operating system packages, libraries, and dependencies. It uses databases like CVE (Common Vulnerabilities and Exposures) to compare components within container images with known vulnerabilities.

Policy Assessment: Users can set up and enforce security policies through the Anchore Engine that define configurations, package versions, and vulnerability thresholds for container images. It assesses container images against these policies to ensure alignment with security best practices and organizational guidelines.

Image Digest Analysis and Metadata Evaluation: The Anchore Engine scrutinizes metadata from container images, such as the image digest, layer data, and package manifests, to offer insights into their contents and interdependencies. This assists users in grasping the makeup of container images while identifying security threats or compliance concerns.

Customizable Policies and Whitelists: Users have the option to craft security policies and whitelists customized for their distinct needs and scenarios. Anchore Engine offers policy customization options, allowing organizations to adjust vulnerability severity levels, blacklist packages, and conduct compliance checks according to their risk tolerance and regulatory requirements.

Seamless Integration With CI/CD Pipelines: Anchore Engine smoothly integrates with CI/CD pipelines to automate security assessments and ensure policy adherence throughout the container lifecycle. It provides plugins and APIs for integration with CI/CD tools, enabling automated vulnerability scanning and policy enforcement during the build and deployment stages.

Notification System and Alerts: Anchore Engine alerts users about security vulnerabilities, policy breaches, and compliance concerns found in container images via email notifications, webhook alerts, and connections to external notification systems. This enables prompt responses to security issues and helps maintain compliance with security standards.

Scalability and Performance Optimization: Anchore Engine is built for scalability, supporting analysis and scanning of container images across distributed environments. By leveraging parallel processing and caching mechanisms, it enhances performance while reducing scanning durations, ensuring swift security assessments of container images at scale.
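For a hands-on feel for image scanning with one of the tools mentioned above, Trivy can scan a local or remote image from the command line. A minimal sketch, where the image name is hypothetical:

PowerShell
# Scan a container image for known CVEs (image name is illustrative)
trivy image registry.example.com/my-app:1.0

# Fail a CI job only on high and critical findings
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/my-app:1.0

The --severity and --exit-code flags make it easy to wire the scan into a CI/CD stage, in the same spirit as the Clair and Anchore pipeline integrations described above.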
Container Orchestration Security

Securing container orchestration involves protecting the platform itself and the containerized workloads it oversees. As platforms such as Kubernetes, Docker Swarm, and Apache Mesos gain popularity for orchestrating and scaling containerized applications, prioritizing security measures becomes crucial.

Kubernetes Pod Security Policy: A Kubernetes feature that sets security rules at the pod level by controlling access and managing volume mounts.
Kube-bench: A tool that assesses Kubernetes clusters against the industry practices defined in the CIS Kubernetes Benchmark.
Docker Swarm: Docker Swarm is Docker's native clustering and orchestration tool. It simplifies the orchestration of containers by providing features like load balancing and service discovery.
Sysdig Secure: A container security platform that includes runtime threat detection, vulnerability management, and compliance checks for Kubernetes setups.

2. Serverless Security

AWS Lambda Security Best Practices: AWS provides guidelines on securing serverless applications, specifically on AWS Lambda.
OWASP Serverless Top 10: This project highlights security risks in serverless setups and provides effective mitigation strategies.
Snyk: A platform dedicated to identifying and fixing vulnerabilities in open-source dependencies.

3. API Security

API security involves the practices, methods, and technologies utilized to safeguard APIs from unauthorized access, data breaches, and malicious attacks. As APIs serve a central function in software development by facilitating communication and data interchange among various systems, ensuring their security is crucial for protecting sensitive data and upholding the reliability of applications and services. Here are some essential elements of API security:

Authentication: Employ robust authentication techniques to confirm the identity of API users and guarantee that only approved individuals and applications can reach protected resources. This may involve utilizing API keys, OAuth tokens, JWT (JSON Web Tokens), or client certificates for authentication.

Authorization: Enforce access controls and authorization policies to limit access to API endpoints and resources based on the roles, permissions, and privileges of users. Implement role-based access control (RBAC) or attribute-based access control (ABAC) to establish and oversee authorization rules.

Encryption: Secure sensitive data transmitted through APIs by encrypting it to prevent interception or monitoring. Utilize transport layer security (TLS/SSL) to encrypt communications between clients and servers, ensuring data confidentiality and integrity.

Input Validation: Carefully sanitize any data that comes from API users.
This helps protect against injection attacks such as SQL injection or XSS (cross-site scripting). Validation and sanitization techniques ensure user input is filtered and cleaned before it is used in downstream processes.

Rate Limiting and Throttling: Put measures in place to control the flow of API requests in order to prevent misuse, denial-of-service (DoS) attacks, and brute-force attacks. Setting limits on how many requests can be made based on factors like user identity, IP address, or API key reduces the risk of overwhelming the system and depleting resources.

Audit Logging: Keeping track of all activities within APIs is vital for monitoring access attempts and security incidents. Logging these events makes it possible to keep an eye on user actions, detect anomalous behavior, and investigate security concerns promptly. Detailed audit logs contain information such as the requests made to the API, the responses received, the user identities involved, timestamps for each action taken, and the outcome of those actions.

API Gateway: API gateways act as a hub for managing and securing all APIs effectively. These gateways help enforce security policies across APIs by handling tasks like authentication checks, authorization verification, data encryption, and request rate control. With features such as access control mechanisms, traffic management tools, real-time monitoring capabilities, and in-depth analytics reports, they enhance the security posture and operational efficiency of APIs.

Security Testing: Regularly test the security of APIs by conducting security assessments like penetration testing, vulnerability scanning, and code reviews. This helps to identify and fix security flaws, misconfigurations, and vulnerabilities to ensure the security of APIs and their related components.

4. Google Cloud Security Command Center

Google Cloud Security Command Center (Cloud SCC) is a security management and data protection platform provided by Google Cloud Platform (GCP). It offers comprehensive insights into, and oversight of, security and compliance risks across GCP infrastructure, services, and applications. Key features of Google Cloud Security Command Center include:

Asset Inventory: Cloud SCC offers a view of all cloud assets deployed in an organization's GCP environment, such as virtual machines, containers, databases, storage buckets, and networking resources. It automatically classifies cloud assets while providing metadata and contextual information about each asset.

Security Findings: Cloud SCC consolidates security findings and insights from GCP security services like Google Cloud Monitoring and Google Cloud Logging, as well as third-party security tools. It prioritizes security threats like vulnerabilities, misconfigurations, or suspicious activities across resources, and offers advice for addressing these issues.

Vulnerability Assessment: Through integration with tools like Google Cloud Security Scanner and third-party vulnerability management solutions, Cloud SCC conducts automated vulnerability scans to assess the security status of cloud assets. By pinpointing known vulnerabilities in operating systems, software packages, and dependencies, it furnishes reports on vulnerabilities along with guidance for remediation.

Threat Detection: Cloud SCC's threat detection capabilities promptly identify and address security threats and suspicious activities.
It relies on machine learning algorithms, anomaly detection methods, and threat intelligence sources to scrutinize cloud logs and telemetry data for indicators of compromise (IOCs) and security incidents.

Policy Monitoring and Enforcement: Cloud SCC empowers organizations to establish and uphold security policies and compliance requirements for resources through Security Health Analytics and Policy Intelligence. It constantly watches resources for compliance breaches, misconfigurations, and deviations from security policies, issuing alerts and notifications for resolution.

Data Risk Assessment: Cloud SCC provides tools for assessing data risks to help organizations pinpoint sensitive data, such as personally identifiable information (PII), intellectual property, and confidential data stored in GCP services. It evaluates data usage trends, access controls, and encryption configurations to assess data security risks and compliance status.

Compliance Reporting: Cloud SCC includes predefined compliance frameworks such as CIS benchmarks, GDPR regulations, and HIPAA standards. It generates compliance reports along with dashboards that assist organizations in demonstrating adherence to mandates and industry norms.

5. Security Information and Event Management (SIEM)

Security Information and Event Management (SIEM) is a cybersecurity approach that involves gathering, consolidating, scrutinizing, and correlating security data from sources across an organization's IT environment. SIEM solutions offer a unified view of security events, alarms, and occurrences, empowering organizations to effectively spot, investigate, and address security risks. Key elements and functionalities of SIEM solutions encompass:

Data Gathering: SIEM solutions amass security-related data from origins like network devices, servers, endpoints, applications, cloud services, and security utilities. Data inputs may include logs, events, alarms, flow records, configuration files, and threat intelligence feeds.

Standardization and Consolidation: SIEM platforms normalize and consolidate security data from disparate sources into a uniform format for examination and correlation. This process involves parsing information accurately while categorizing and aligning security events to streamline analysis and correlation.

Analysis and Correlation: SIEM solutions correlate security events from multiple sources to pinpoint trends, irregularities, and possible security incidents. They leverage correlation rules, heuristics, statistical analysis, and machine learning algorithms to detect malicious activities, threats, and attack patterns.

Alerting and Notification: SIEM systems generate alerts and notifications for security events and incidents that meet predefined criteria or thresholds. They send out notifications, display dashboards, and generate reports to alert security teams about possible security breaches, policy infringements, or unusual activities.

Responding to Incidents: SIEM solutions aid in the detection of and response to incidents by offering tools for probing security events, examining evidence, and performing root cause analysis. They empower security teams to assess, prioritize, and address security incidents efficiently.

Ensuring Compliance and Generating Reports: SIEM platforms assist in monitoring compliance status and generating reports by offering predefined compliance templates, audit trails, and reporting functionalities. They help organizations showcase adherence to mandates, industry norms, and internal policies through automated reporting procedures.
Integrating Systems and Streamlining Processes: SIEM solutions integrate with other security tools and technologies to enhance their capabilities while streamlining security workflows. They connect with threat intelligence platforms, endpoint detection and response (EDR) solutions, incident response tools, and security orchestration and automation platforms for a cohesive approach.

Adaptability and Efficiency: SIEM platforms are designed for adaptability and efficiency, able to manage large datasets securely while catering to the demands of large-scale deployments. They utilize distributed architectures along with data partitioning and data compression techniques to maintain performance.

Conclusion

Embracing cloud-native applications revolutionizes software development and leverages cloud computing's power for innovation and agility through microservices, Docker, and Kubernetes. However, robust security practices are essential to safeguard these environments effectively. With a holistic security approach, organizations can unlock cloud-native benefits while mitigating risks and ensuring resilience in modern software ecosystems.
Implementing CI/CD pipelines for Docker applications, especially when deploying to AWS environments like Lambda, requires a well-thought-out approach to ensure smooth, automated processes for both development and production stages. The following outlines how to set up a CI/CD pipeline using AWS services, considering a Docker application scheduled to execute on AWS Lambda every 12 hours.

Overview

The goal is to automate the process from code commit to deployment, ensuring that any updates to the application are automatically tested and deployed to the development environment and, following approval, to production. AWS services like CodeCommit, CodeBuild, CodeDeploy, and Lambda, along with CloudWatch for scheduling, will be instrumental in this setup.

Application Containerization With Docker

Application containerization with Docker is a pivotal step in modernizing applications, ensuring consistent environments from development to production, and facilitating continuous integration and continuous deployment (CI/CD) processes. This section expands on how to effectively containerize an application using Docker, a platform that packages an application and all its dependencies into a Docker container to ensure it runs uniformly in any environment.

Understanding Docker Containers

Docker containers encapsulate everything an application needs to run: the application's code, runtime, libraries, environment variables, and configuration files. Unlike virtual machines, containers share the host system's kernel but run in isolated user spaces. This makes them lightweight, allowing for rapid startup and scalable deployment practices.

Dockerizing an Application: The Process

Creating a Dockerfile: A Dockerfile is a text document containing all the commands a user could call on the command line to assemble an image. Creating a Dockerfile involves specifying a base image (e.g., Python, Node.js), adding your application code, and defining commands to run the application. Example: For a Python-based application, your Dockerfile might start with something like FROM python:3.8-slim, followed by COPY . /app to copy your application into the container, and CMD ["python", "./app/my_app.py"] to run your application.

Building the Docker Image: Once the Dockerfile is set up, use the docker build command to create an image. This image packages up your application and its environment. Command Example: docker build -t my_app:1.0 . This command tells Docker to build an image named my_app with a tag of 1.0 based on the Dockerfile in the current directory (.).

Running Your Docker Container: After building the image, run your application in a container using Docker's run command. Command Example: docker run -d -p 5000:5000 my_app:1.0 This command starts a container based on the my_app:1.0 image, mapping port 5000 of the container to port 5000 on the host, allowing you to access the application via localhost:5000.

Best Practices for Dockerizing Applications

Minimize Image Size: Use smaller base images and multi-stage builds to reduce the size of your Docker images, which speeds up build times and deployment.

Leverage .dockerignore: Similar to .gitignore, a .dockerignore file can help you exclude files not relevant to the build (like temporary files or dependencies that should be fetched within the Dockerfile), making builds faster and more secure.

Parameterize Configuration: Use environment variables for configuration that changes between environments (like database connection strings), making your containerized application more portable and secure.

Logging and Monitoring: Ensure your application logs to stdout/stderr, allowing Docker to capture logs effectively, which can then be managed and monitored by external systems.

Health Checks: Implement health checks in your Dockerfile to help Docker and orchestration tools like Kubernetes know if your application is running correctly (see the sketch after this list).
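Here is a minimal sketch of the health check practice, assuming the example app above serves HTTP on port 5000 and that curl is installed in the image; the endpoint path is illustrative:

Dockerfile
FROM python:3.8-slim
COPY . /app
# curl is needed for the health check below (not included in the slim base image)
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
# Mark the container unhealthy if the app stops answering on port 5000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:5000/ || exit 1
CMD ["python", "./app/my_app.py"]

docker ps then shows the container's health status (healthy/unhealthy), and orchestrators can use the same signal to restart or reschedule the container.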
Source Control With GitLab

Integrating a CI/CD pipeline with GitLab for deploying a Dockerized application to AWS Lambda involves several key steps, from setting up your GitLab repository for source control to automating deployments through GitLab CI/CD pipelines. In the context of our example (an e-commerce platform's price update microservice scheduled to run every 12 hours), let's break down how to set up source control with GitLab and provide a code example for the Lambda function.

Initialize a GitLab Repository: Start by creating a new project in GitLab for your application. This repository will host your application code, Dockerfile, buildspec.yml, and .gitlab-ci.yml files.

Push Your Application to GitLab: Clone your newly created repository locally. Add your application files, including the Dockerfile and any scripts or dependencies it has. Commit and push these changes to your GitLab repository.

PowerShell
git clone <your-gitlab-repository-url>
# Add your application files to the repository
git add .
git commit -m "Initial commit with application and Dockerfile"
git push -u origin master

Set up .gitlab-ci.yml: The .gitlab-ci.yml file defines your CI/CD pipeline in GitLab. For deploying a Dockerized Lambda function, this file needs to include steps for building the Docker image, pushing it to Amazon ECR, and updating the Lambda function to use the new image.

Code Example for AWS Lambda Function

Before setting up the CI/CD pipeline, let's define the Lambda function. Assuming the microservice is written in Python, the function might look like this:

Python
import requests
import boto3

def update_pricing_data(event, context):
    # Your code to fetch new pricing data
    pricing_data = requests.get("https://api.example.com/pricing").json()

    # Logic to update the database with new pricing data
    # For simplicity, we'll assume it's a direct call to an RDS instance or DynamoDB
    # Note: Ensure your Lambda function has the necessary permissions for database access
    db_client = boto3.client('dynamodb')
    for product in pricing_data['products']:
        # Example of updating DynamoDB (simplified)
        db_client.update_item(
            TableName='ProductPrices',
            Key={'productId': {'S': product['id']}},
            UpdateExpression='SET price = :val',
            ExpressionAttributeValues={':val': {'N': str(product['price'])}}
        )

    return {
        'statusCode': 200,
        'body': 'Product pricing updated successfully.'
    }

This function fetches pricing data from an external API and updates a DynamoDB table with the new prices.
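Before wiring the function into the pipeline, it can be handy to smoke-test the handler locally. A minimal sketch, assuming valid AWS credentials in your environment and that the (hypothetical) pricing API above is reachable:

Python
# Invoke the handler directly with an empty event and no context
if __name__ == "__main__":
    result = update_pricing_data(event={}, context=None)
    print(result)  # expect {'statusCode': 200, 'body': 'Product pricing updated successfully.'}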
Integrating AWS Lambda With GitLab CI/CD

Dockerfile: Ensure your Dockerfile is set up to containerize your Lambda function correctly. AWS provides base images for Lambda which you can use as your starting point.

Dockerfile
# Example Dockerfile for a Python-based Lambda function
FROM public.ecr.aws/lambda/python:3.8

# Copy function code and requirements.txt into the container image
COPY update_pricing.py requirements.txt ./

# Install the function's dependencies
RUN python3.8 -m pip install -r requirements.txt

# Set the CMD to your handler
CMD ["update_pricing.update_pricing_data"]

.gitlab-ci.yml Example: Define the pipeline in .gitlab-ci.yml for automating the build and deployment process.

YAML
stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region us-east-1)
    - docker build -t my_ecr_repo/my_lambda_function:latest .
    - docker push my_ecr_repo/my_lambda_function:latest

deploy_lambda:
  stage: deploy
  script:
    - aws lambda update-function-code --function-name myLambdaFunction --image-uri my_ecr_repo/my_lambda_function:latest
  only:
    - master

This CI/CD pipeline automates the process of building your Docker image, pushing it to Amazon ECR, and updating the AWS Lambda function to use the new image. Make sure to replace placeholders like my_ecr_repo/my_lambda_function with your actual ECR repository URI and adjust the AWS CLI commands based on your setup and region. By following these steps and leveraging GitLab's CI/CD capabilities, you can automate the deployment process for your Dockerized AWS Lambda functions, ensuring that your e-commerce platform's price update microservice is always running with the latest codebase.

Deploying Docker Application on AWS Lambda

Deploying a Docker application on AWS Lambda involves several steps, starting from containerizing your application to configuring the Lambda function to use the Docker image. This process enables you to leverage the benefits of serverless architecture, such as scalability, cost-efficiency, and ease of deployment, for your containerized applications. Here's how you can deploy a Docker application on AWS Lambda:

Containerize Your Application

Create a Dockerfile: Begin by defining a Dockerfile in your application's root directory. This file specifies the base image, dependencies, and other configurations needed to containerize your application.

Dockerfile
# Example Dockerfile for a Python-based application
FROM public.ecr.aws/lambda/python:3.8

# Copy the application source code and requirements.txt
COPY app.py requirements.txt ./

# Install any dependencies
RUN python3.8 -m pip install -r requirements.txt

# Define the handler function
CMD ["app.handler"]

Build the Docker Image: With the Dockerfile in place, build the Docker image using the Docker CLI. Ensure the image is compatible with AWS Lambda's container image requirements.

PowerShell
docker build -t my-lambda-app .

Push the Docker Image to Amazon ECR:

Create an ECR Repository: If you haven't already, create a new repository in Amazon Elastic Container Registry (ECR) to store your Docker image.

PowerShell
aws ecr create-repository --repository-name my-lambda-app

Authenticate Docker to Your ECR Registry: Authenticate your Docker CLI to the Amazon ECR registry to push images.

PowerShell
aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin <your-aws-account-id>.dkr.ecr.<your-region>.amazonaws.com

Tag and Push the Docker Image: Tag your local Docker image with the ECR repository URI and push it to ECR.
PowerShell
docker tag my-lambda-app:latest <your-aws-account-id>.dkr.ecr.<your-region>.amazonaws.com/my-lambda-app:latest
docker push <your-aws-account-id>.dkr.ecr.<your-region>.amazonaws.com/my-lambda-app:latest

Create and Configure the AWS Lambda Function:

Create a New Lambda Function: Go to the AWS Lambda console and create a new Lambda function. Choose the "Container image" option as your source and select the Docker image you pushed to ECR.

Configure Runtime Settings: Specify the handler information if required. For container images, the handler corresponds to the CMD or ENTRYPOINT specified in the Dockerfile.

Adjust Permissions and Resources: Set the appropriate execution role with permissions that your Lambda function needs to access AWS resources. Also, configure memory, timeout, and other resources according to your application's requirements.

Testing and Deployment

Deploy and Test: With the Lambda function configured, deploy it and perform tests to ensure it's working as expected. You can invoke the Lambda function manually from the AWS console or using the AWS CLI.

Set Up Triggers (Optional): Depending on your use case, set up triggers to automatically invoke your Lambda function. For a Docker application that needs to execute periodically (e.g., every 12 hours), you can use Amazon CloudWatch Events to schedule the function.

PowerShell
aws events put-rule --name "MyScheduledRule" --schedule-expression "rate(12 hours)"
aws lambda add-permission --function-name "myLambdaFunction" --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn <arn-of-the-scheduled-rule>
aws events put-targets --rule "MyScheduledRule" --targets "Id"="1","Arn"="<Lambda-function-ARN>"

Recap: Deploying a Docker Application on AWS Lambda

Container Image Support: Ensure your application fits within the Lambda container image guidelines. You may need to adjust your Dockerfile to meet Lambda's requirements.
Upload to ECR: Push your Docker image to Amazon ECR, which will serve as the source for Lambda to pull and execute the container.
Create Lambda Function: Configure a new Lambda function to use the container image from ECR as its source. Set the execution role with appropriate permissions for Lambda operations.

Scheduling Execution With AWS CloudWatch

CloudWatch Event Rule: Set up a CloudWatch Event rule to trigger your Lambda function every 12 hours. Use a cron expression for scheduling (e.g., cron(0 */12 * * ? *)).

Monitoring and Rollback

CloudWatch Metrics and Logs: Utilize CloudWatch for monitoring application logs and performance metrics. Set alarms for any critical thresholds to be notified of issues.
Rollback Strategy: Ensure your CI/CD pipeline supports rolling back to previous versions in case of deployment failures or critical issues in production.

Conclusion

Implementing CI/CD for Docker applications deploying to AWS environments, including Lambda for scheduled tasks, enhances operational efficiency, ensures code quality, and automates deployment processes. By leveraging AWS services and Docker, businesses can achieve a highly scalable and reliable deployment workflow for their applications.
NCache Java Edition with its distributed caching technique is a powerful tool that helps Java applications run faster, handle more users, and be more reliable. In today's world, where people expect apps to work quickly and without any problems, knowing how to use NCache Java Edition is very important. It's a key piece of technology for both developers and businesses who want to make sure their apps can give users fast access to data and a smooth experience. This makes NCache Java Edition an important part of making great apps. This article is made especially for beginners to make the ideas and steps of adding NCache to your Java applications clear and easy to understand. It doesn't matter if you've been developing for years or if you're new to caching; this article will help you get a good start with NCache Java Edition. Let's start with a step-by-step process to set up a development workstation for NCache with the Java setup.

NCache Server Installation: Java Edition

NCache has different deployment options:

On-premises
Cloud
Using Docker/Kubernetes

You can check all the deployment options and the packages available for deployment here. NCache recommends at least SO-16 (16 GB RAM, 8 vCPU) to get optimum performance in a production environment; for a higher transaction load, go with SO-32, SO-64, or SO-128.

NCache Server Deployment With Docker Image

NCache provides different images (alachisoft/ncache - Docker Image | Docker Hub) of the Java edition for the Windows and Linux platforms. Let's see how to deploy the NCache server using the latest Linux Docker image. Use the command below to pull the latest image:

PowerShell
docker pull alachisoft/ncache:latest-java

Now that we have successfully pulled the Docker image, run it using the Docker command below.

For a development workstation:

PowerShell
docker run --name ncache -itd -p 8251:8251 -p 9800:9800 -p 8300:8300 -p 8301:8301 alachisoft/ncache:latest-java

Use the actual host configuration for a production NCache server:

PowerShell
docker run --name ncache -itd --network host alachisoft/ncache:latest-java

The above command will run the NCache server listening on port 8251. Now, launch the NCache Management Center in the browser (localhost:8251). You will get a modal popup to register your license key as shown below. Click on Start Free Trial to activate the free trial with the license key, using the form below. You can register your license key using this registration page form, or use the Docker command below:

PowerShell
docker exec -it ncache /opt/ncache/bin/tools/register-ncacheevaluation -firstname [registered first name] -lastname [registered last name] -company [registered company name] -email [registered e-mail id] -key [key]

Now, open the NCache Management Center from the browser at http://localhost:8251/.

NCache Cache Cluster

Let's install one more image in a different instance, with proper network configuration. Use the document below for the network configuration with NCache Docker image deployment: Create NCache Containers for Windows Server. I deployed one image in the 10.0.0.4 instance and another in the 10.0.0.5 instance. I then hopped into the 10.0.0.4 NCache Management Center and removed the default cluster cache created during the installation. Let's create a new clustered cache using the NCache Management Center Wizard.
Click on New from the Clustered Cache page as shown in the figure below. It's a 7-step process to create a clustered cache with the NCache Management Center interface, which we will go through one by one.

Step 1: In-Memory Store
In this step, you can define the in-memory store type, the name of the clustered cache, and the serialization type. In my case, I named the clustered cache demoCache and chose JSON serialization.

Step 2: Caching Topology
Define the caching topology on this screen; in my case, I just went with the default options.

Step 3: Cache Partitions and Size
On this screen, we can define the cache partition size. In my case, I just went with the default value. With this option, it will skip step 4. I also added two server nodes: 10.0.0.4 and 10.0.0.5.

Step 5: Cluster TCP Parameters
Define the Cluster Port, Port Range, and Batch Interval values. In my case, I went with the default values.

Step 6: Encryption and Compression Settings
You can enable the encryption and compression settings in this step. I just went with the default values.

Step 7: Advanced Options
You can enable eviction and also check other advanced options. In my case, I checked the option to start the cache on finish. Finally, click on Finish.

Once the process is complete, it will create and start the clustered cache with the two nodes (10.0.0.4 and 10.0.0.5). Now the cluster is formed.

Start the Cache

You can use the start option from the NCache Management Center to start the clustered cache, as shown in the figure below. You can also use the command below to start the cache:

PowerShell
start-cache -name demoCache

Run a Stress Test

Click Test-Stress and select the duration to run the stress test. This is one of my favorite features in the NCache Management Center: you can initiate a stress test with ease, just by a button click. You can also use a command to run the stress test. For example, to initiate a stress test for the demoCache cluster with default settings:

PowerShell
test-stress -cachename demoCache

Click on Monitor to check the metrics. You can monitor the number of requests processed by each node. Click on Statistics to get the complete statistics of the clustered caches.

SNMP Counter to Monitor NCache

Simple Network Management Protocol (SNMP) is a key system used for keeping an eye on and managing different network devices and their activities. It's a part of the Internet Protocol Suite and helps in sharing important information about the network's health and operations between devices like routers, switches, servers, and printers. This allows network managers to change settings, track how well the network is doing, and get alerts on any issues. SNMP is widely used and important for keeping networks running smoothly and safely; it's a vital part of managing and fixing networks. NCache has made SNMP monitoring easier by now allowing the publication of counters through a single port. Before, a separate port was needed for each cache. Make sure the NCache service and the cache(s) to monitor are up and running.

Configure NCache Service

The Alachisoft.NCache.Service.dll.config file, located in the %NCHOME%\bin\service folder, provides the ability to activate or deactivate the monitoring of cache counters via SNMP by modifying particular options. These options are marked by specific tags.
Update the values for the tags below:

XML
<add key="NCacheServer.EnableSnmpMonitoring" value="true"/>
<add key="NCacheServer.SnmpListenersInfoPort" value="8256"/>
<add key="NCacheServer.EnableMetricsPublishing" value="true"/>

Set the NCacheServer.EnableSnmpMonitoring tag to true to turn on SNMP monitoring of NCache cache counters; initially, this tag is off (false). Set the NCacheServer.SnmpListenersInfoPort tag to the port SNMP should listen on; the default port is 8256, but you can adjust it according to your needs. Set the NCacheServer.EnableMetricsPublishing tag to true if you want metrics to be sent to the NCache Service, or to false to stop them. Remember to restart the NCache Service once you've made the necessary adjustments to the service configuration files.

SNMP Monitoring

NCache provides a single MIB file called alachisoft.mib that describes the various counters that can be checked using SNMP. This file tells you about the ports used for different types of caches and client activities. You can find it at %NCHOME%\bin\resources. To look at these counters, you can use a program such as the MIB Browser Free Tool to browse the MIB file. Use port 8256 to connect to NCache, and open the SNMP Table from View to check all the attributes of NCache as shown in the figure below. To check specific attribute details in the SNMP Table, first pick the attributes you want to see. As a sample, I selected cacheName, cacheSize, cacheCount, fetchesPerSec, requestsPerSec, and additionPerSec. Then, click on View from the menu at the top before you choose the SNMP Table. You'll then see the values of the counters in the table as shown in the figure below.
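If you prefer the command line to a GUI MIB browser, the standard net-snmp tools can read the same counters. This is only a minimal sketch: the SNMPv2c version and the public community string are assumptions (check your SNMP setup), and the Linux container MIB path is taken from the install location used earlier in this article.

Shell
# Walk everything NCache exposes on the SNMP port
# (SNMPv2c and the "public" community string are assumptions; adjust to your setup)
snmpwalk -v2c -c public localhost:8256 .1

# Add the NCache MIB directory so counters appear by name instead of raw OIDs
snmpwalk -v2c -c public -M +/opt/ncache/bin/resources -m ALL localhost:8256 .1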
Summary

This article provided a beginner-friendly guide on how to get started with NCache Java Edition, covering essential steps such as installing the NCache server, deploying it using a Docker image, starting the cache, conducting a stress test to evaluate its performance, and monitoring its operation through SNMP counters. It helps you get started with enhancing your Java application's speed and reliability by implementing distributed caching with NCache.

Docker has become an essential tool for developers, offering consistent and isolated environments without installing full-fledged products locally. The ideal setup for microservice development using Spring Boot with MySQL as the backend often involves a remotely hosted database. However, for rapid prototyping or local development, running a MySQL container through Docker offers a more streamlined approach. I encountered a couple of issues while attempting to set up this configuration with the help of Docker Desktop for a proof of concept. An online search revealed a lack of straightforward guides on integrating Spring Boot microservices with MySQL in Docker Desktop; most resources primarily focus on containerizing the Spring Boot application. Recognizing this gap, I decided to write this short article.

Prerequisites

Before diving in, we must have the following:

- A foundational understanding of Spring Boot and microservices architecture
- Familiarity with Docker containers
- Docker Desktop installed on our machine

Docker Desktop Setup

We can install Docker Desktop using this link. Installation is straightforward, with steps that can be navigated efficiently, as illustrated in the accompanying screenshots.

Configuring the MySQL Container

Once Docker Desktop is installed, the first launch walks us through some standard questions, and we can skip the registration part. Once the desktop app is ready, we need to search for the MySQL container image, as shown below. We need to click Pull and then Run the container. When we run the container, a settings dialog pops up, as shown below. Enter the settings as follows:

- MYSQL_ROOT_PASSWORD: This environment variable specifies the password that will be set for the MySQL root superuser account.
- MYSQL_DATABASE: This environment variable allows us to specify the name of a database to be created on image startup. If a user/password was supplied (see below), that user will be granted superuser access (corresponding to GRANT ALL) to this database.
- MYSQL_USER, MYSQL_PASSWORD: These variables are used to create a new user and set that user's password. This user will be granted superuser permissions for the database specified by the MYSQL_DATABASE variable.

Upon running the container, Docker Desktop displays logs indicating the container's status. We can now connect to the MySQL instance using tools like MySQL Workbench to manage database objects.
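If you prefer scripting over clicking through the Docker Desktop UI, the same container can be started from the command line. A minimal sketch, assuming the settings described above; the container name mysql-esign, the root password value, and the mysql:8.0 tag are illustrative choices, not from the original setup:

Shell
docker run -d --name mysql-esign -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=RootPassword1 \
  -e MYSQL_DATABASE=e-sign \
  -e MYSQL_USER=e-sign \
  -e MYSQL_PASSWORD=Password1 \
  mysql:8.0

These are the same documented environment variables the settings dialog maps to, so the resulting container behaves identically to the one started from the UI.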
Spring Application Configuration

In the Spring application, we can add the following settings to application.properties:

Properties
spring.esign.datasource.jdbc-url=jdbc:mysql://localhost:3306/e-sign?allowPublicKeyRetrieval=true&useSSL=false
spring.esign.datasource.username=e-sign
spring.esign.datasource.password=Password1

We opted for a custom prefix, spring.esign, over the default spring.datasource for our database configuration within the Spring Boot application. This approach shines in scenarios where the application requires connections to multiple databases. To enable this custom configuration, we need to define the Spring Boot configuration class ESignDbConfig:

Java
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "eSignEntityManagerFactory",
        transactionManagerRef = "eSignTransactionManager",
        basePackages = "com.icw.esign.repository")
public class ESignDbConfig {

    @Bean("eSignDataSource")
    @ConfigurationProperties(prefix = "spring.esign.datasource")
    public DataSource geteSignDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }

    @Bean(name = "eSignEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean eSignEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("eSignDataSource") DataSource dataSource) {
        return builder.dataSource(dataSource)
                .packages("com.icw.esign.dao")
                .build();
    }

    @Bean(name = "eSignTransactionManager")
    public PlatformTransactionManager eSignTransactionManager(
            @Qualifier("eSignEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}

- @Bean("eSignDataSource"): This method defines a Spring bean for the eSign module's data source. The @ConfigurationProperties(prefix = "spring.esign.datasource") annotation automatically maps and binds all configuration properties starting with spring.esign.datasource from the application's configuration files (like application.properties or application.yml) to this DataSource object. The method uses DataSourceBuilder to create and configure a HikariDataSource, a highly performant JDBC connection pool. This means the eSign module will use a dedicated database whose connection parameters are isolated from other modules or the main application database.
- @Bean(name = "eSignEntityManagerFactory"): This method creates a LocalContainerEntityManagerFactoryBean, which is responsible for creating the EntityManagerFactory. This factory is crucial for managing JPA entities specific to the eSign module. The EntityManagerFactory is configured to use the eSignDataSource for its database operations and to scan the package com.icw.esign.dao for entity classes. This means that only entities in this package or its subpackages will be managed by this EntityManagerFactory and can thus access the eSign database.
- @Bean(name = "eSignTransactionManager"): This defines a PlatformTransactionManager tied to the eSign module's EntityManagerFactory. This transaction manager ensures that all database operations performed by entities managed by the eSignEntityManagerFactory are wrapped in transactions. It enables the application to manage transaction boundaries, roll back operations on failures, and commit changes when operations succeed.

Repository

Now that we have defined the configurations, we can create the repository classes and build the other objects required for the API endpoint.
Java
@Repository
public class ESignDbRepository {

    private static final Logger logger = LoggerFactory.getLogger(ESignDbRepository.class);

    private static final String P_GET_DOC_ESIGN_INFO = "p_get_doc_esign_info";

    @Qualifier("eSignEntityManagerFactory")
    @Autowired
    private EntityManager entityManager;

    @Autowired
    ObjectMapper objectMapper;

    public List<DocESignMaster> getDocumentESignInfo(String docUUID) {
        // Call the stored procedure and map results to DocESignMaster entities
        StoredProcedureQuery proc =
                entityManager.createStoredProcedureQuery(P_GET_DOC_ESIGN_INFO, DocESignMaster.class);
        proc.registerStoredProcedureParameter("v_doc_uuid", String.class, ParameterMode.IN);
        proc.setParameter("v_doc_uuid", docUUID);
        try {
            return (List<DocESignMaster>) proc.getResultList();
        } catch (PersistenceException ex) {
            logger.error("Error while fetching document eSign info for docUUID: {}", docUUID, ex);
        }
        return Collections.emptyList();
    }
}

@Qualifier("eSignEntityManagerFactory"): Specifies which EntityManagerFactory should be used to create the EntityManager, ensuring that the correct database configuration is used for eSign operations.

Conclusion

Integrating Spring Boot microservices with Docker Desktop streamlines microservice development and testing. This guide walked through the essential steps of setting up a Spring Boot application and ensuring seamless communication with a MySQL container hosted on Docker Desktop. This quick setup is useful for a proof of concept or for setting up an isolated local development environment.
In the dynamic realm of software development and deployment, Docker has emerged as a cornerstone technology, revolutionizing the way developers package, distribute, and manage applications. Docker simplifies the process of handling applications by containerizing them, ensuring consistency across various computing environments. A critical aspect of Docker that often puzzles many is Docker networking. It's an essential feature, enabling containers to communicate with each other and the outside world. This ultimate guide aims to demystify Docker networking, offering you tips, tricks, and best practices to leverage Docker networking effectively.

Understanding Docker Networking Basics

Docker networking allows containers to communicate with each other and with other networks. Docker provides several network drivers, each serving different use cases:

- Bridge: The default network driver for containers, ideal for running standalone containers that need to communicate.
- Host: Removes network isolation between the container and the Docker host, using the host's networking directly.
- Overlay: Connects multiple Docker daemons together and enables swarm services to communicate with each other.
- Macvlan: Allows assigning a MAC address to a container, making it appear as a physical device on your network.
- None: Disables all networking.

Creating a Custom Bridge Network

Creating a custom bridge network enhances control over the network architecture, allowing containers to communicate on the same Docker host. Here's how you can create and manage a custom bridge network:

docker network create --driver bridge my_bridge_network

This command creates a new bridge network named my_bridge_network. You can then run containers on this network using the --network option:

docker run -d --network=my_bridge_network --name my_container alpine

Networking Best Practices

- Isolate environments: Use separate networks for development, testing, and production environments to reduce the risk of accidental interference or security breaches.
- Leverage DNS for service discovery: Docker's internal DNS resolves container names to IP addresses within the same network, simplifying service discovery.
- Secure communications: Use encrypted overlay networks for sensitive applications, especially when operating across multiple Docker hosts.

Advanced Networking Tips

Static IP assignments: While Docker dynamically assigns IP addresses, you might need static IPs for certain containers. This can be achieved by specifying the --ip flag when connecting a container to the network. However, manage this carefully to avoid IP conflicts.

docker network connect --ip 172.18.0.22 my_bridge_network my_container

Network aliases: When multiple services need to reach a container under a well-known name, network aliases come in handy, allowing a container to be resolved by additional DNS names on the same network.

docker run -d --network=my_bridge_network --name my_service --network-alias service_alias alpine

Monitor network traffic: Use docker network inspect and third-party monitoring solutions to keep an eye on the network traffic between containers. This is crucial for diagnosing issues and ensuring optimal performance.

docker network inspect my_bridge_network

Utilize port mapping for public services: For services that need to be accessible outside the Docker host, map container ports to host ports. This is particularly useful for web servers, databases, or any services that must be accessible from the network.

docker run -d -p 80:80 --name web_server nginx
Troubleshooting Common Network Issues

- Connectivity issues: Check if the container is connected to the correct network and inspect firewall rules that may prevent communication.
- DNS resolution problems: Ensure the internal Docker DNS is correctly resolving container names. If not, consider specifying a custom DNS server in the Docker daemon configuration.
- Port conflicts: When mapping ports, ensure that the host port is not already in use to avoid conflicts leading to container startup failures.

SDN: Software-Defined Firewalls Using Docker Networking

Software-Defined Networking (SDN) in Docker offers a powerful way to manage network traffic, apply security policies, and isolate network segments at a granular level. By leveraging Docker's networking capabilities, you can create sophisticated network topologies that include software-defined firewalls. This setup allows for precise control over how containers communicate, enhancing security and reducing the risk of unauthorized access.

Example Scenario: Application With UI, REST API, and Database

In this scenario, we illustrate the use of Docker networking to create a multi-tier application architecture with enforced network boundaries:

- The app-ui container, part of the frontend network, is designed to serve the user interface.
- The rest-api container, part of the services network, handles business logic and processes API requests.
- Both the app-ui and rest-api containers are also part of a shared network, enabling them to communicate directly.
- The database container is isolated in the backend network and is accessible only by the rest-api container, ensuring that direct access from the app-ui container is blocked.

Network Configuration

First, create the networks:

docker network create frontend
docker network create services
docker network create shared
docker network create backend

Next, run the containers within their respective networks (sleep infinity keeps these bare alpine demo containers running; without a command they would exit immediately):

# Run app-ui container in frontend and shared networks
docker run -d --name app-ui --network frontend alpine sleep infinity
docker network connect shared app-ui

# Run rest-api container in services, shared, and connect to backend
docker run -d --name rest-api --network services alpine sleep infinity
docker network connect shared rest-api
docker network connect backend rest-api

# Run database container in backend network
docker run -d --name database --network backend alpine sleep infinity

In this setup, the app-ui container cannot directly access the database container, as they are in separate, isolated networks. The rest-api container acts as an intermediary, ensuring that only authorized services can interact with the database. This architecture mimics a software-defined firewall, where network policies are defined and enforced through Docker's networking capabilities. By carefully designing network topologies and leveraging Docker's network isolation features, you can create secure, scalable, and efficient application infrastructures that closely align with modern security best practices.
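You can sanity-check the isolation from inside the running containers. A quick sketch: name resolution only works on networks a container is attached to, so pinging database (busybox ping ships with alpine) should succeed from rest-api but fail from app-ui:

docker exec rest-api ping -c 1 database   # succeeds: both containers share the backend network
docker exec app-ui ping -c 1 database     # fails: app-ui has no network path to backend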
Conclusion

Mastering Docker networking is pivotal for developers and IT professionals looking to optimize container-based applications. By understanding the fundamentals and employing the tips and best practices outlined in this guide, you can enhance the performance, security, and reliability of your Docker deployments. Remember, the key to effective Docker networking lies in careful planning, consistent monitoring, and ongoing optimization based on the unique requirements of your environment. Leverage this guide to navigate the complexities of Docker networking, ensuring your containerized applications communicate efficiently and securely. With these insights, you're well on your way to becoming a Docker networking guru, ready to tackle the challenges of modern application deployment head-on.
kubectl, the command-line interface for running commands against Kubernetes clusters, is a vital tool for any software engineer working with Kubernetes. It offers a myriad of commands, each with its own set of options, making it a powerful tool for managing and troubleshooting Kubernetes environments. This article aims to elucidate some of the most useful kubectl commands that software engineers use in their day-to-day operations.

1. Checking the Cluster Status

Before initiating any operation, it's crucial to get the cluster's status. Here are a few commands that help you do that:

- kubectl cluster-info: This command provides basic information about the cluster and its primary services.
- kubectl get nodes: This command lists all nodes that can be used to host applications.

2. Working With Pods

Pods are the smallest deployable units in Kubernetes. The following commands help manage them:

- kubectl get pods: This command lists all Pods in the default namespace.
- kubectl describe pod [pod-name]: This command gets detailed information about a specific Pod, including events and state.
- kubectl logs [pod-name]: This command shows the logs of the specified Pod, helpful for debugging.
- kubectl exec -it [pod-name] -- /bin/bash: This command opens an interactive shell inside the specified Pod, useful for debugging and inspection.

3. Working With Deployments

Deployments are a higher-level concept that manages Pods. Here are some useful commands for dealing with deployments:

- kubectl get deployments: This command lists all deployments in the default namespace.
- kubectl describe deployment [deployment-name]: This command provides detailed information about a specific deployment.
- kubectl scale deployment [deployment-name] --replicas=[number-of-replicas]: This command helps scale a deployment by increasing or decreasing the number of replicas.
- kubectl rollout status deployment [deployment-name]: This command shows the status of the deployment rollout.

4. Working With Services

Services are an abstract way to expose applications running on a set of Pods. The following commands can be used to manage services:

- kubectl get services: This command lists all services in the default namespace.
- kubectl describe service [service-name]: This command provides detailed information about a specific service.
- kubectl expose deployment [deployment-name] --type=NodePort --name=[service-name]: This command exposes a deployment as a service, making it accessible within the cluster or from the internet.

5. Working With ConfigMaps and Secrets

ConfigMaps and Secrets are Kubernetes objects that allow you to separate your application's configuration from your code. Here are some commands to help manage them:

- kubectl get configmaps: This command lists all ConfigMaps in the default namespace.
- kubectl get secrets: This command lists all secrets in the default namespace.
- kubectl create configmap [configmap-name] --from-file=[path-to-file]: This command creates a new ConfigMap from a file.
- kubectl create secret generic [secret-name] --from-literal=key=value: This command creates a new secret.

6. Debugging and Troubleshooting

Kubernetes offers several commands to help find and correct issues:

- kubectl top node: This command shows the CPU and memory usage of each node, which can be useful for identifying nodes that are under a lot of load.
- kubectl top pod: This command shows the CPU and memory usage of each Pod, which can be useful for identifying Pods that are using a lot of resources.
- kubectl get events --sort-by=.metadata.creationTimestamp: This command lists all events in the default namespace, sorted by their creation time. This can be helpful for identifying recent issues that might have occurred in the cluster.
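Tying a few of these together, a typical debugging pass over a misbehaving workload might look like the sketch below. The pod name my-app-7d4f9c6b5-x2k8q is hypothetical; substitute one from your own get pods output:

kubectl get pods                                        # spot the failing Pod
kubectl describe pod my-app-7d4f9c6b5-x2k8q             # inspect its events and state
kubectl logs my-app-7d4f9c6b5-x2k8q --previous          # logs from the previous (crashed) run
kubectl exec -it my-app-7d4f9c6b5-x2k8q -- /bin/bash    # open a shell for a closer look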
7. Cleanup

Kubernetes provides commands for cleaning up resources:

- kubectl delete pod [pod-name]: This command deletes the specified Pod.
- kubectl delete deployment [deployment-name]: This command deletes the specified deployment.
- kubectl delete service [service-name]: This command deletes the specified service.
- kubectl delete all --all: This command deletes all resources in the default namespace. Be careful with this one!

8. Working With Namespaces

Namespaces are used in environments with many users spread across multiple teams. Here are some commands related to managing them:

- kubectl get namespaces: Lists all namespaces in your cluster
- kubectl create namespace [namespace-name]: Creates a new namespace
- kubectl config set-context --current --namespace=[namespace-name]: Changes the namespace for the current context

9. Managing Persistent Volumes

Persistent volumes provide ways for Pods to store data. Here are some commands to work with them:

- kubectl get pv: Lists all persistent volumes
- kubectl describe pv [volume-name]: Provides detailed information about a specific volume
- kubectl get pvc: Lists all persistent volume claims, which are requests for storage by a user

10. Dealing With Nodes

Nodes are worker machines in Kubernetes and are a crucial part of the system. Here are some commands related to nodes:

- kubectl cordon [node-name]: Marks the node as unschedulable, preventing new Pods from being scheduled on the node
- kubectl uncordon [node-name]: Removes the unschedulable mark from the node, allowing new Pods to be scheduled on it
- kubectl drain [node-name]: Drains the node in preparation for maintenance

11. Resource Quotas and Limit Ranges

These commands are useful for managing the consumption of compute resources:

- kubectl get quota: Lists all resource quotas in the current namespace
- kubectl describe limitrange [limit-range-name]: Provides detailed information about a specific limit range

12. Accessing API Objects

These commands allow you to access raw API objects:

- kubectl api-resources: Lists all API resources available on the server
- kubectl explain [resource]: Provides documentation for the resource

Conclusion

Mastering kubectl commands is essential for efficiently managing Kubernetes clusters. While it may seem daunting at first, with regular use these commands will become second nature. The commands listed in this guide are just the tip of the iceberg; to explore more commands and options, refer to the official Kubernetes documentation or use the kubectl help command. Remember, the flexibility of kubectl makes it a vital tool for any software engineer dealing with Kubernetes.
This article is part of a series exploring a workshop guiding you through the open source project Fluent Bit, what it is, a basic installation, and setting up the first telemetry pipeline project. Learn how to manage your cloud-native data from source to destination using the telemetry pipeline phases covering collection, aggregation, transformation, and forwarding from any source to any destination. The previous article in this series helped with the installation of Fluent Bit on our local machine using the project source code. This time around, we'll learn how to use Fluent Bit in a container on our local machine, including how to run the container while using local configuration files. You can find more details in the accompanying workshop lab. Let's get started with Fluent Bit in a container.

Installing Fluent Bit using a container image will be demonstrated here using the open-source project Podman. It's assumed you have previously installed the Podman command line tooling. It should also be noted that the following code and command line examples are all based on using an OSX machine. If you want to use other container tooling, such as Docker, most of the commands are the same with just a substitution of the tooling name (docker instead of podman).

Containerized Fluent Bit

Running Fluent Bit in a container is pretty straightforward, and we'll also show you how to do it using local configuration files so that you can use it to build your first telemetry pipelines. Just start the container image as follows:

$ podman run --name fb -ti cr.fluentbit.io/fluent/fluent-bit:2.2.2

Let's take a look at what this command is actually doing. The --name fb flag gives the container a name we can reference, -ti assigns the container a console for output and keeps it interactive, and the image reference pins the version supported in this workshop (cr.fluentbit.io/fluent/fluent-bit:2.2.2). You'll notice the container starts and takes over the console for its output, where Fluent Bit is measuring CPU usage and dumping it to the console (CTRL-C will stop the container):

Fluent Bit v2.2.2
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[Fluent Bit ASCII art logo banner]

[2024/03/01 10:38:52] [ info] [fluent bit] version=2.2.2, commit=eeea396e88, pid=1
[2024/03/01 10:38:52] [ info] [storage] ver=1.5.1, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/03/01 10:38:52] [ info] [cmetrics] version=0.6.6
[2024/03/01 10:38:52] [ info] [ctraces ] version=0.4.0
[2024/03/01 10:38:52] [ info] [input:cpu:cpu.0] initializing
[2024/03/01 10:38:52] [ info] [input:cpu:cpu.0] storage_strategy='memory' (memory only)
[2024/03/01 10:38:52] [ info] [sp] stream processor started
[2024/03/01 10:38:52] [ info] [output:stdout:stdout.0] worker #0 started
[0] cpu.local: [[1709289532.997338599, {}], {"cpu_p"=>0.250000, "user_p"=>0.000000, "system_p"=>0.250000, "cpu0.p_cpu"=>0.000000
[0] cpu.local: [[1709289533.996516160, {}], {"cpu_p"=>0.000000, "user_p"=>0.000000, "system_p"=>0.000000, "cpu0.p_cpu"=>0.000000

Should we encounter failures at any time during installation or testing, don't worry! These steps can be rerun any time after you fix any problems reported. We might have to remove the Fluent Bit container, depending on how far you get before something goes wrong. Just stop, remove, and restart it as follows:

$ podman container stop fb
$ podman container rm fb
$ podman run --name fb -ti cr.fluentbit.io/fluent/fluent-bit:2.2.2

There might come a moment when we want to stop working with Fluent Bit and pause until a later time. To do this, we can shut down our container environment by stopping the running Fluent Bit container and then stopping the Podman virtual machine as follows:

$ podman container stop fb
$ podman machine stop

Let's take a look at building our container images with our specific telemetry pipeline configurations.

Building Container Images

Often you want to set up your specific configuration and add that to the container image. This means you build your own container image and then run that with custom configurations copied into the container image.
For example, let's assume we have the following files in our current directory, all parts of a Fluent Bit telemetry pipeline configuration:

- workshop-fb.conf: The main configuration file, importing all other split-out configuration files
- inputs.conf: File containing all input plugin configurations
- outputs.conf: File containing all output plugin configurations

To build a container image, we need to provide a Buildfile, which defines what base container image to use and lists where to copy our above configuration files into that base image, as shown here:

FROM cr.fluentbit.io/fluent/fluent-bit:2.2.2

COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
COPY ./inputs.conf /fluent-bit/etc/inputs.conf
COPY ./outputs.conf /fluent-bit/etc/outputs.conf

Once we have this file, we can now build our container image as follows:

$ podman build -t workshop-fb:v1 -f Buildfile
STEP 1/4: FROM cr.fluentbit.io/fluent/fluent-bit:2.2.2
STEP 2/4: COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
--> a379e7611210
STEP 3/4: COPY ./inputs.conf /fluent-bit/etc/inputs.conf
--> f39b10d3d6d0
STEP 4/4: COPY ./outputs.conf /fluent-bit/etc/outputs.conf
COMMIT workshop-fb:v1
--> b06df84452b6
Successfully tagged localhost/workshop-fb:v1
b06df84452b6eb7a040b75a1cc4088c0739a6a4e2a8bbc2007608529576ebeba

The build command uses the flag -t workshop-fb:v1, which tags the container image with a name and version number. This helps us to run the image by name later in the next workshop lab (a quick smoke test is shown below). Furthermore, it makes use of the -f Buildfile we just created to copy in our custom configuration files, as shown in the build output.
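As a quick smoke test before the next lab, the freshly tagged image can be started just like the stock image; note the localhost/ prefix Podman gave it in the build output, and remove any earlier container using the fb name first:

$ podman container rm -f fb        # clean up any previous container named fb
$ podman run --name fb -ti localhost/workshop-fb:v1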
This process now has us ready to start building our first pipelines, but what if you want to use other versions of Fluent Bit container images?

Other Versions

You might be wondering how to run other versions of Fluent Bit in a container. For example, Fluent Bit v3.0.0 was just released, so the workshop will be updated to follow the releases. Feel free to give this a try by starting the container image as follows:

$ podman run --name fb -ti cr.fluentbit.io/fluent/fluent-bit:3.0.0

This starts the container and takes over the console for its output, where Fluent Bit measures CPU usage and dumps it to the console (CTRL-C will stop the container):

Fluent Bit v3.0.0
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[Fluent Bit ASCII art logo banner]

[2024/03/29 15:29:26] [ info] [fluent bit] version=3.0.0, commit=f499a4fbe1, pid=1
[2024/03/29 15:29:26] [ info] [storage] ver=1.5.1, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/03/29 15:29:26] [ info] [cmetrics] version=0.7.0
[2024/03/29 15:29:26] [ info] [ctraces ] version=0.4.0
[2024/03/29 15:29:26] [ info] [input:cpu:cpu.0] initializing
[2024/03/29 15:29:26] [ info] [input:cpu:cpu.0] storage_strategy='memory' (memory only)
[2024/03/29 15:29:26] [ info] [output:stdout:stdout.0] worker #0 started
[2024/03/29 15:29:26] [ info] [sp] stream processor started
[0] cpu.local: [[1711726167.221360412, {}], {"cpu_p"=>0.000000, "user_p"=>0.000000, "system_p"=>0.000000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>1.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000}]
[0] cpu.local: [[1711726168.216299447, {}], {"cpu_p"=>1.000000, "user_p"=>0.500000, "system_p"=>0.500000, "cpu0.p_cpu"=>0.000000, "cpu0.p_user"=>0.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000}]

This puts all available container images at your fingertips.

What's Next?

This article helped us install Fluent Bit on our local machine using the available container builds. The series continues with the next step in this workshop: creating our telemetry pipelines using either the source install or container images. Stay tuned for more hands-on material to help you with your cloud-native observability journey.
Are you using the C4 model to create your architecture diagrams? Then Structurizr might be a good option for you to consider. With Structurizr, you can create and maintain your diagrams as code. Let's take a closer look at it in this blog!

Introduction

The C4 model helps you with visualizing software architecture. We all know the whiteboard diagrams cluttered with boxes and connectors. The C4 model approach helps you visualize software architecture in a more structured way. A good explanation is given on the C4 model website, so if you do not know what it is, it is worth reading that first. The next question is which tool to use to create the diagrams. You can use Visio, draw.io, PlantUML, even PowerPoint, or whatever tool you normally use for creating diagrams. However, these tools do not check whether naming, relations, etc. are used consistently across the different diagrams. Besides that, it might be difficult to review new versions of diagrams because it is not clear which changes were made. In order to solve these problems, Simon Brown, the author of the C4 model, created Structurizr. Structurizr allows you to create diagrams as code. Based on the code, Structurizr visualizes the diagrams for you and allows you to interact with the visualization. Because the diagrams are maintained in code, you can add them to your version control system (Git), and changes in the diagrams are tracked and can be easily reviewed. In the remaining part of this blog, you will explore some of the features of Structurizr. You will only use two diagram types of the C4 model (the most commonly used ones):

- System context diagram: Your application as a black box indicating the users of your application
- Container diagram: An overview of your software architecture

Sources used in this blog can be found on GitHub.

Prerequisites

Prerequisites for this blog are:

- Basic knowledge of the C4 model
- Basic knowledge of Docker
- Linux is used, so if you are using a different operating system, you will need to adjust the commands accordingly.

Installation

There are different installation options for Structurizr. In this blog, you will make use of Structurizr Lite, an easy Docker-based installation that supports one workspace. Create a data directory in the root of the repository; this directory will be mapped as a volume in the Docker container. Execute the following commands from within the root of the repository:

Shell
$ docker pull structurizr/lite
$ docker run -it --rm -p 8080:8080 -v ./data:/usr/local/structurizr structurizr/lite

Navigate in your browser to http://localhost:8080 and the Structurizr webpage is shown. In the data directory, you notice that some data has been added. When you take a closer look at it, you notice that all files have root ownership. This is not very convenient, because when you want to edit the files, you have to do so as root.

Shell
$ ls -la
...
drwxr-xr-x 1 root root   22 feb 18 12:04 .structurizr/
-rw-r--r-- 1 root root  316 feb 18 12:03 workspace.dsl
-rw-r--r-- 1 root root 2218 feb 18 12:04 workspace.json

Stop the container with CTRL+C and remove the contents of the data directory. You will start the container with the same user you are using on your host machine. With the id command, you can retrieve your uid and gid, as shown below.
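For example (a sketch; the user name and id values shown here are illustrative and will differ on your machine):

Shell
$ id
uid=1000(youruser) gid=1000(youruser) groups=1000(youruser),27(sudo)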
It is important that you use the same uid and gid inside the container so that files can be edited easily both inside and outside the container. Replace <uid> and <gid> in the following command with your own uid and gid, and start the Docker container again:

Shell
$ docker run -it --rm -p 8080:8080 -u <uid>:<gid> -v ./data:/usr/local/structurizr structurizr/lite

When you check the ownership of the files again, you will notice that the directory and files are now owned by your host user. Navigate again in the browser to Structurizr and enable Auto canvas size and Auto-layout. This will create a more beautiful diagram.

Initial DSL

First, let's take a closer look at the initial DSL that has been created. The complete DSL reference can be found here. The initial DSL is the following:

DSL
workspace {

    model {
        user = person "User"
        softwareSystem = softwareSystem "Software System"

        user -> softwareSystem "Uses"
    }

    views {
        systemContext softwareSystem "Diagram1" {
            include *
        }
    }

    configuration {
        scope softwaresystem
    }

}

The following sections can be viewed:

- model: The model contains the actors and the software system. If you need to reference these in the DSL, e.g., in relations, you assign them to a variable. The variables user and softwareSystem are used here. The model also contains the relations; one relation from user to softwareSystem is created.
- views: To visualize the model, you need to create views. In this initial DSL, a view for the System Context Diagram is created. With the include keyword, you can include all or a part of the model.
- configuration: This section will not be covered in this blog.

Basic System Context DSL

Now it is time to create a System Context Diagram for your application. The application is a webshop with two types of users, a customer and an administrator. The webshop makes use of a global payment system that handles bank transactions. The DSL is the following:

DSL
workspace {

    model {
        customer = person "Customer" "The customer of our webshop"
        administrator = person "Administrator" "The administrator of the webshop"
        globalPayment = softwareSystem "Global Payment" "Used for all banking transactions"
        myWebshop = softwareSystem "My Webshop" "Our beautiful webshop"

        customer -> myWebshop "Uses"
        administrator -> myWebshop "Uses"
        myWebshop -> globalPayment "Uses"
    }

    views {
        systemContext myWebshop "MyWebshopSystemContextView" {
            include *
            autolayout
        }
    }

}

Some things to notice here:

- Relations have the following format: identifier -> identifier description technology. The identifiers must correspond to a variable defined above the relations.
- Views have the following format: systemContext softwareSystem key. The softwareSystem must correspond to an identifier defined in the model; the key can be chosen freely.
- The autolayout option can be added to the view so that it is enabled by default for this view.

A problem I encountered is the following: only one person was shown in the view, the last one defined. I managed to solve this by commenting out the entire views section. A default view is used this way, and with this default view, all persons were shown. After this, I enabled the views section again, and now all persons were shown. The System Context Diagram is shown as follows. You can also apply themes to the views, which will enhance your diagram.

DSL
views {
    systemContext myWebshop "MyWebshopSystemContextView" {
        include *
        autolayout
    }

    theme default
}

The System Context Diagram becomes the following. This already looks more like a C4 model System Context Diagram.

Basic Container DSL

Time to create a diagram for the software architecture, the Container Diagram. Assume that you need a frontend for both users, a common backend, and, of course, a database.
The DSL becomes the following:

DSL
workspace {

    model {
        customer = person "Customer" "The customer of our webshop"
        administrator = person "Administrator" "The administrator of the webshop"
        globalPayment = softwareSystem "Global Payment" "Used for all banking transactions"
        myWebshop = softwareSystem "My Webshop" "Our beautiful webshop" {
            customerFrontend = container customerFrontend "The frontend for the customer"
            administratorFrontend = container administratorFrontend "The frontend for the administrator"
            webshopBackend = container webshopBackend "The webshop backend"
            webshopDatabase = container webshopDatabase "The webshop database"
        }

        // system context relationships
        customer -> myWebshop "Uses"
        administrator -> myWebshop "Uses"
        myWebshop -> globalPayment "Uses"

        // software system relationships
        customer -> customerFrontend "Uses" "https"
        administrator -> administratorFrontend "Uses" "https"
        customerFrontend -> webshopBackend "Uses" "http"
        administratorFrontend -> webshopBackend "Uses" "http"
        webshopBackend -> webshopDatabase "Uses" "ODBC"
        webshopBackend -> globalPayment "Uses" "https"
    }

    views {
        systemContext myWebshop "MyWebshopSystemContextView" {
            include *
            autolayout
        }

        container myWebshop "MyWebshopSoftwareSystemView" {
            include *
            autolayout
        }

        theme default
    }

}

What has been added to the DSL?

- The application myWebshop in the model has been extended with the containers defining the architecture.
- In the model, the relations between the containers are defined. Note that this time, the technology used is also added to the relations.
- A container view is added to the views.

The container view is represented as follows. The fun part is that when you navigate to the System Context Diagram, you can double-click the myWebshop software system and it will show you the Container Diagram. Awesome!

Styling

In the Container Diagram, the database is represented as a rounded box. Normally, a database is represented as a cylinder. Is it possible to adjust this? Yes, you can, by means of styles. The list of possible shapes can be found here. A style can be applied to an element by using a tag. First, in the views section, add a style for an element with the tag Database. You apply the shape Cylinder to this element.

DSL
views {
    ...
    styles {
        element "Database" {
            shape Cylinder
        }
    }
}

Now you need to add a tag to the corresponding container. When you define a container, you use the following format: container name [description] [technology] [tags]. In the container definition, you did not specify the technology. There are two options here:

- Add a technology to the container. This is a bit error-prone, as you have to know the format by heart and you can simply forget to add the technology.
- Set the tags explicitly. This is the option chosen here.

DSL
webshopDatabase = container webshopDatabase "The webshop database" {
    tags "Database"
}

The complete DSL can be found on GitHub. The resulting diagram is the following.

Conclusion

Structurizr helps you with creating diagrams according to the C4 model. The diagrams are created by means of a DSL, which has several advantages. You need to learn the DSL, of course, but it can be learned quite easily.
What Is Patch Management?

Patch management is a proactive approach to mitigating already-identified security gaps in software. Most of the time, these patches are provided by third-party vendors to proactively close the security gaps and secure the platform. For example, RedHat provides security advisories and patches for various RedHat products such as RHEL, OpenShift, and OpenStack, while Microsoft provides patches in the form of updates for Windows OS. These patches include updates to third-party libraries, modules, packages, or utilities. Patches are prioritized and, in most organizations, patching of systems is done at a specific cadence and handled through a change control process. Patches are deployed through lower environments first to understand the impact and then applied in higher environments, such as production. Various tools such as Ansible and Puppet can handle patch management seamlessly for enterprise infrastructures (a small example follows below). These tools can automate the patch management process, ensuring that security patches and updates are promptly applied to minimize application disruptions and security risks. Coordinating patching and testing with the various stakeholders using the infrastructure is a big part of minimizing interruptions.
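As a tiny illustration of tool-driven patching — my example, not from the article — an Ansible ad hoc command can apply pending package updates across a fleet. It assumes an inventory with a webservers group of RHEL-family hosts (both are assumptions):

Shell
# Update all packages on every host in the 'webservers' inventory group;
# -b escalates privileges, and the dnf module drives the package manager.
ansible webservers -b -m ansible.builtin.dnf -a "name=* state=latest"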
What Is a Container?

A container is the smallest unit of software that runs in the container platform. Unlike traditional software that, in most cases, includes only application-specific components such as application files, executables, or binaries, containers include the operating system layer required to run the application and all of the application's other dependencies. Containers include everything needed to run the application; hence, they are self-contained and provide greater isolation. With all necessary components packaged together, containers provide inherent security and control but, at the same time, are more vulnerable to threats. Containers are created using a container image, and a container image is created using a Dockerfile/Containerfile that includes instructions for building the image. Most container images use open-source components; therefore, organizations have to make an effort to design and develop recommended methods to secure containers and container platforms. Traditional security strategies and tools do not work for securing containers. DZone previously covered how to health check Docker containers.

For infrastructure using physical machines or virtual machines for hosting applications, the operations team would SSH to the servers (manually or with automation) and then upgrade the system to the latest version or latest patch on a specific cadence. If the application team needed to make any changes, such as updating configurations or libraries, they would do the same thing by logging in to the server and making the changes. In many cases, servers are configured for running specific applications. The server then becomes a pet that needs to be cared for, as it creates a dependency for the application, and keeping such servers updated with the latest patches sometimes becomes challenging due to dependency issues. If the server is shared by multiple applications, then updating or patching it consumes a lot of effort from everyone involved to make sure the applications run smoothly post-upgrade. Containers, however, are meant to be immutable once created and are expected to be short-lived.

As mentioned earlier, containers are created from container images, so it's really the container image that needs to be patched. Every image contains one or more file system layers, which are built based on the instructions from the Containerfile/Dockerfile. Let's further delve into how to do patch management and vulnerability management for containers.

What Is Vulnerability Management?

While patch management is proactive, vulnerability management is a reactive approach to managing and maintaining the security posture within an organization. Platforms and systems are scanned in real time, on specific schedules, or on an ad hoc basis to identify common vulnerabilities, also known as CVEs (Common Vulnerabilities and Exposures). The tools used to discover CVEs rely on various vulnerability databases such as the U.S. National Vulnerability Database (NVD) and the CERT/CC Vulnerability Notes Database. Most vendors that provide scanning tools also maintain their own database to compare CVEs and score them based on impact. Every CVE gets a unique identifier (e.g., CVE-2023-52136) along with a severity score (CVSS) and a resolution, if any. Once CVEs are discovered, they are categorized based on severity and prioritized based on impact. Not every CVE has a resolution available. Therefore, organizations must continuously monitor such CVEs to understand their impact and implement measures to mitigate them. This could involve steps such as temporarily removing the system from the network or shutting down the system until a suitable solution is found. High-severity and critical vulnerabilities should be remediated so that they can no longer be exploited. As is evident, patch management and vulnerability management are intrinsically linked in terms of security. Their shared objective is to safeguard an organization's infrastructure and data from cyber threats.

Container Security

Container security entails safeguarding containerized workloads and the broader ecosystem through a mix of security tools and technologies. Patch management and vulnerability management are integral parts of this process. The container ecosystem is also often referred to as a container supply chain. The container supply chain includes various components, and when we talk about securing containers, we essentially mean monitoring and securing the components listed below.

Containers

A container is also called a runtime instance of a container image. It uses the instructions provided in the container image to run itself. The container has lifecycle stages such as create, start, run, stop, and delete. It is the smallest unit that exists in the container platform; you can log in to it, execute commands, monitor it, and so on.

Container Orchestration Platform

Orchestration platforms provide various capabilities such as high availability (HA), scalability, self-healing, logging, monitoring, and visibility for container workloads.

Container Registry

A container registry includes one or more repositories where container images are stored, version-controlled, and made available to container platforms.

Container Images

A container image is sometimes also called a build-time instance of a container. It is a read-only template or artifact that includes everything needed to start and run the container (e.g., a minimal operating system, libraries, packages, software) along with how to run and configure the container.
Development Workspaces

Development workspaces reside on developer workstations and are used for writing code, packaging applications, and creating and testing containers.

Container Images: The Most Dynamic Component

Considering patch management and vulnerability management for containers, let's focus on container images, the most dynamic component of the supply chain. In the container management workflow, most exploits are encountered due to various security gaps in container images. Let's categorize the various container images used in an organization based on hierarchy.

1. Base Images

This is the first level in the image hierarchy. As the name indicates, these base images are used as parent images for most of the custom images that are built within the organization. These images are pulled down from various external public and private image registries such as DockerHub, the RedHat Ecosystem Catalog, and the IBM Cloud.

2. Enterprise Images

Custom images are created and built from base images and include enterprise-specific components, standard packages, or structures as part of enterprise security and governance. These images are modified to meet certain standards for the organization and published in private container registries for consumption by various application teams. Each image has an assigned owner responsible for managing the image's lifecycle.

3. Application Images

These images are built using enterprise custom images as a base. Applications are added on top of them to build application images. These application images are further deployed as containers to container platforms.

4. Builder Images

These images are primarily used in the CI/CD pipeline for compiling, building, and deploying application images. They are based on enterprise custom images and include the software required to build applications, create container images, perform testing, and finally, deploy images as part of the pipeline.

5. COTS Images

These are vendor-provided images for vendor products, also called commercial off-the-shelf (COTS) products, managed by vendors. The lifecycle of these images is owned by the vendors. For simplification, the image hierarchy is represented in the diagram below.

Now that we understand the various components of the container supply chain and the container image hierarchy, let's understand how patching and vulnerability management are done for containers.

Patching Container Images

Most base images are provided by community members or vendors. Similar to traditional patches provided by vendors, image owners proactively patch these base images to mitigate security issues and regularly make new versions available in the container registries. Let's take the example of the Python 3.11 image from RedHat. RedHat patches this image regularly and also provides a Health Index based on scan results. RedHat proactively fixes vulnerabilities and publishes new versions post-testing. The image below indicates that the Python image is patched every 2-3 months, and corresponding CVEs are published by RedHat. This patching involves modifying the Containerfile to update the packages required to fix vulnerabilities, as well as building and publishing a new version (tag) of the image in the registry. Let's move to the second level in the image hierarchy: enterprise custom images. These images are created by organizations using base images (e.g., Python 3.11) to add enterprise-specific components to the image and harden it further for use within the organization.
If the base image changes in the external registry, the enterprise custom image should be updated to use the newer version of the base image. This creates a new version of the enterprise custom image using an updated Containerfile. The same workflow should be followed to update any of the downstream images, such as application and builder images, that are built using enterprise custom images. This way, the entire chain of images is patched. In this entire process, the patching is done by updating the Containerfile and publishing new images to the image registry. As for COTS images, the same process is followed by the vendor, and consumers of the images have to make sure the new versions are being used in the organization.

Vulnerability Management for Containers

Patch management is only half of the process of securing containers. Container images have to be scanned regularly, or at a specific cadence, to identify newly discovered CVEs within images. Various scanning tools available in the market scan container images as well as platforms to identify security gaps and provide visibility into such issues. These tools identify security gaps such as images running with root privileges, world-writable directories, exposed secrets, exposed ports, vulnerable libraries, and many more. These vulnerability reports help organizations understand the security posture of the images being used as well as of the containers running in the platform, and they provide enough information to address the issues. Some of these tools also provide the ability to define policies and controls such that images violating the policies defined by the organization are blocked from running. They could even stop running containers, if that's what the organization decides to implement. As for mitigating such vulnerabilities, the process involves the same steps mentioned in the patch management section; i.e., updating the Containerfile to create a new Docker image, rescanning the image to make sure the reported vulnerabilities no longer exist, testing the image, and publishing it to the image registry. Depending on where the vulnerability exists in the hierarchy, the respective image and all downstream images need to be updated.

Let's look at an example. Below is the scan report for the python-3.11:1-34 image. It reports 2 important CVEs against 3 packages. These 2 CVEs will also be reported in all downstream images built based on the python-3.11:1-34 image. On further browsing CVE-2023-38545, more information is provided, including the action required to remediate the CVE. It indicates that, based on the operating system within the corresponding image, the curl package should be upgraded in order to resolve the issue. From an organizational standpoint, to address this vulnerability, a new Dockerfile or Containerfile needs to be developed. This file should contain instructions to upgrade the curl package and generate a new image with a unique tag. Once the new image is created, it can be utilized in place of the previously affected image. As per the hierarchy mentioned in image-1, all downstream images should be updated with the new image in order to fix the reported CVE across all images. All images, including COTS images, should be regularly scanned. For COTS images, the organization should contact the vendor (image owner) to fix critical vulnerabilities.

Shift Left Container Security

Image scanning should be part of every stage in the supply chain pipeline.
Detecting and addressing security issues early is crucial to avoid accumulating technical debt as we progress through the supply chain. The sooner we identify and rectify security vulnerabilities, the less disruptive they will be to our operations and the lower the amount of work required to fix them later.

Local Scanning

In order to build Docker images locally, developers need tools such as Docker or Podman installed on their workstations. Along with these tools, scanning tools should be made available so that developers can scan images pulled from external registries to determine if those images are safe to use. Also, once they build application images, they should have the ability to scan those images locally before moving to the next stage in the pipeline. Analyzing and fixing vulnerabilities at the source is a great way to minimize security risks further along in the lifecycle. Most tools provide a command-line interface or IDE plugins to make local scanning easy (see the sketch after this section). Some organizations create image governance teams that pull, scan, and approve images from external registries before allowing them to be used within the organization. They take ownership of base images and manage the lifecycle of these images. They communicate with all stakeholders on image updates and monitor new images being used by downstream consumers. This is a great way to maintain control over what images are being used within an organization.

Build Time Scanning

Integrate image scanning tools into the CI/CD pipeline during the image build stage to make sure every image gets scanned. Performing image scans as soon as the image is built, and deciding on that basis whether the image can be published to the image registry, is a good approach to allowing only safe images into the registry. Additional control gates can be introduced before the production use of the image by enforcing certain policies specifically for production images.

Image Registry Scanning

Build-time scanning is essentially on-demand scanning of images. However, given that new vulnerabilities are constantly being reported and added to the CVE database, images stored in the registry need to be scanned at regular intervals. Images with critical vulnerabilities have to be reported to the image owners so they can take action.

Runtime Container Scanning

This is real-time scanning of running containers within a platform to identify the security posture of the containers. Along with the analysis done for images, a runtime scan also detects additional issues, such as a container running with root privileges, what ports it's listening on, whether it's connected to the internet, and any runaway process being executed. Based on the capability of the scanning tool, it provides full visibility and a security view of the entire container platform, including the hosts on which the platform is running. The tool could also enforce certain policies, such as blocking specific containers or images from running, identifying specific CVEs, and taking action. Note that this is the last stage in the container supply chain; hence, fixing any issues at this stage is costlier than at any other stage.
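To make the local and build-time scanning stages concrete, here's a minimal sketch using the open-source Trivy scanner — my choice for illustration; the article doesn't prescribe a tool, and the localhost/my-app:1.0 image name is hypothetical:

Shell
# Scan a base image pulled from an external registry before approving it for use
trivy image python:3.11
# Gate a CI build: exit non-zero if critical vulnerabilities are found
trivy image --severity CRITICAL --exit-code 1 localhost/my-app:1.0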
Challenges With Container Security

From a process standpoint, it looks straightforward to update base images with new versions and then all downstream images. However, it comes with various challenges. Below are some of the common challenges you will encounter as you start looking into the process of patching and vulnerability management for containers:

- Identifying updates to any of the parent/base images in the hierarchy
- Identifying the image hierarchy and the impacted images in the supply chain
- Making sure all downstream images are updated when a new parent image is made available
- Defining ownership of images and identifying image owners
- Communicating across various groups within the organization to ensure controls are being maintained
- Building a list of trusted images to be used within the organization and managing their lifecycle
- Managing vendor images, due to lack of control
- Managing release timelines while at the same time securing the pipeline
- Defining controls across the enterprise with respect to audit, security, and governance
- Defining exception processes to meet business needs
- Selecting the right scanning tool for the organization and integrating it with the supply chain
- Providing visibility of vulnerabilities across the organization, including delivering scan results to the respective stakeholders after images are scanned

Patch Management and Containers Summarized

This article talks about how important it is to keep container systems secure, especially by managing patches and dealing with vulnerabilities. Containers are independent software units that are useful but need special security attention. Patch management means making sure everything is up to date, starting from base images through to more specific application and builder images. At the same time, vulnerability management involves regularly checking for potential security issues and fixing them by updating Containerfiles and creating new images. The idea of shifting left suggests including security checks at every step, from creating to running containers. Despite the benefits, there are challenges, such as communicating well across teams and handling images from external sources. This highlights the need for careful control and ongoing attention to keep organizations safe from cyber threats throughout the container process.