Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Best practices, tools, and approaches for Kubernetes monitoring

Categories

Tags event-driven app-development cloud monitoring kubernetes containers devops

Let’s look at some of the available Kubernetes monitoring and Kubernetes logging tools, including Prometheus for monitoring and Grafana for visualization and dashboards. By Kyle Hunter.

Kubernetes monitoring is the process of gathering metrics from the Kubernetes clusters you operate to identify critical events and ensure that all hardware, software, and applications are operating as expected. Aggregating metrics in a central location will help you understand and protect the health of your entire Kubernetes fleet and the applications and services running on it.

There are a variety of popular tools that can enhance your Kubernetes container monitoring efforts. Some of the most common ones include:

  • Prometheus: An open-source event monitoring and alerting tool that collects and stores metrics as time series data. Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project after Kubernetes.
  • Grafana: An open-source visualization and analytics platform for applications and infrastructure that works with monitoring software such as Prometheus. Grafana provides capabilities to collect, store, visualize, and alert on data.
  • Thanos: A metric system that provides a simple and cost-effective way to centralize and scale Prometheus-based monitoring systems.
  • Elasticsearch: A distributed, JSON-based search and analytics engine.
  • Logstash: An open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite stash.
  • Kibana: A data visualization and exploration tool used for log and time series analytics, application monitoring, and operational intelligence use cases.
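To make the Prometheus entry concrete, here is a minimal sketch of a scrape configuration for a Kubernetes cluster using pod service discovery (the job name and the `prometheus.io/scrape` annotation convention are common community defaults, assumed here rather than taken from the article):

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'        # illustrative job name
    kubernetes_sd_configs:
      - role: pod                      # discover scrape targets from the pod API
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

With a config along these lines, any pod that opts in via the annotation is scraped automatically as it comes and goes, which is what makes Prometheus a good fit for dynamic Kubernetes fleets.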

Many teams use these monitoring and logging tools alone or in combination to create their own solutions and address specific container monitoring and Kubernetes application monitoring needs. Good read!

[Read More]

How to develop Event-Driven architectures

Categories

Tags event-driven web-development app-development messaging java software-architecture

The author looks at how to use the open-source Chronicle Queue and Chronicle Wire libraries to structure applications around an Event-Driven Architecture (EDA). EDA is a design pattern in which decoupled components (often microservices) asynchronously publish and subscribe to events. By Rob Austin.

At a high level, event-driven architectures are usually made up of application components connected by an asynchronous messaging system. Events flow as messages between the components, and the components act independently: they don’t need to know about each other. All a component needs to know is how to process incoming messages and how to send messages when its business logic completes. In other words, event-driven architectures are basically fire and forget.

package net.openhft.chronicle.wire.examples;

import net.openhft.chronicle.wire.JSONWire;
import net.openhft.chronicle.wire.Wire;

public class WireExamples4 {

    // Event interface: each method call represents one event type.
    interface Printer {
        void print(String message);
    }

    public static void main(String[] args) {
        Wire wire = new JSONWire();

        // The method writer serializes each interface call as an event on the wire.
        final Printer printer = wire.methodWriter(Printer.class);
        printer.print("hello world");

        // The method reader replays one recorded event into the given handler.
        wire.methodReader((Printer) System.out::println).readOne();
    }
}

When it comes to choosing the message bus, if the components are written in Java, then the open-source Chronicle Queue library is a good choice. This is especially beneficial if the components run on the same host, as Chronicle Queue is a point-to-point messaging layer that works by writing your events to shared off-heap memory. Chronicle Queue offers messaging performance close to that of reading and writing RAM. There is also an extension to Chronicle Queue that sends replicated messages over the network, which is required to support HA/DR and server failover.
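Chronicle Queue itself persists events to memory-mapped off-heap files; as a minimal, dependency-free stand-in, the fire-and-forget pattern can be sketched with a JDK BlockingQueue (the class and event names are illustrative, and this in-memory queue is not Chronicle Queue's actual API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A minimal in-memory sketch of fire-and-forget, point-to-point messaging.
// (This stands in for a real queue such as Chronicle Queue; names are
// illustrative, not taken from the article.)
public class FireAndForgetExample {

    record OrderEvent(String orderId, double price) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<OrderEvent> bus = new ArrayBlockingQueue<>(16);

        // Producer: publishes an event and moves on; it knows nothing
        // about who will consume it.
        bus.put(new OrderEvent("order-1", 99.5));

        // Consumer: processes incoming messages independently.
        OrderEvent event = bus.take();
        System.out.println("processed " + event.orderId());
    }
}
```

The key property carried over from the article is decoupling: the producer's only contract is the event type, not the identity or even the existence of any consumer.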

The author then creates an example app demonstrating how to start building an event-driven solution with Chronicle Wire and Chronicle Queue, starting from a very simple example that constructs a Java method call. Nice tutorial!

[Read More]

How TypeScript design patterns help you write better code

Categories

Tags react javascript web-development app-development

TypeScript is a language that has seen a lot of exposure in the modern world of software engineering. Its powerful, strict type system reduces the likelihood of runtime errors or hidden bugs in production caused by the lack of strong types in JavaScript, TypeScript’s predecessor. By Eslam Hefnawy.

While TypeScript makes the development experience smooth on its own, it’s important to know how a design pattern can help retain or improve the efficiency and usefulness of a codebase. Using the right design pattern in the right situation and knowing how to do so is a crucial skill in any developer’s toolbox.

The article’s main content covers the following patterns:

  • The observer pattern: I know what happened to you
  • The builder pattern: Few subclasses, few problems
  • The proxy pattern: The ideal middleman
  • Use the right design pattern in the right situation

Most importantly, design patterns are language-agnostic and can be implemented in any language to solve the kind of problem that a particular design pattern intends to solve. Good read!
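Since design patterns are language-agnostic, the observer pattern ("I know what happened to you") ports directly to any language; here is a minimal, illustrative sketch in Java (the class names are mine, not the article's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal sketch of the observer pattern: a subject keeps a list of
// subscribers and notifies each of them when an event occurs.
public class ObserverExample {

    static class Subject<T> {
        private final List<Consumer<T>> observers = new ArrayList<>();

        void subscribe(Consumer<T> observer) {
            observers.add(observer);
        }

        // Notify every subscriber of the event.
        void publish(T event) {
            observers.forEach(o -> o.accept(event));
        }
    }

    public static void main(String[] args) {
        Subject<String> subject = new Subject<>();
        subject.subscribe(e -> System.out.println("observer A saw: " + e));
        subject.subscribe(e -> System.out.println("observer B saw: " + e));
        subject.publish("state changed");
    }
}
```

The subject never learns what the observers do with the event, which is exactly the decoupling the pattern is meant to buy.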

[Read More]

An introduction to event-driven architecture

Categories

Tags miscellaneous cio event-driven messaging app-development programming

What is an event-driven architecture? Event-driven architecture (EDA) is a message-processing approach in which components interact by producing and consuming events, generally implemented through a publish-subscribe model. By JIN.

Also, an EDA has no deterministic response time for processing input events. Event notifications signal a change in the state of the system, which can be triggered by an input event.

The traditional request-driven model and EDA are complementary. The traditional request-driven model depends on trust between the cooperating components themselves. Request = command, event = trigger.

EDA comprises four modes:

  • Event notification
  • Event carried state transfer
  • Event sourcing
  • Command Query Responsibility Segregation, CQRS

There are several core aspects to understand. Publishing and subscribing require a message queue service such as Kafka, RocketMQ, or RabbitMQ. The message event stream needs to form a connected structure with the database: during message processing, the database must support high-intensity random lookups over real-time streams, finding the data related to an order and passing the right message to notify the downstream service. An event stream processing flow goes through four steps (collect, enhance, analyze, and dispatch) to generate an output event (e.g., Apache Flink, Amazon Kinesis, Azure Stream Analytics). And there is so much more to learn in this article!
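Of the four modes listed above, event sourcing is often the least intuitive: rather than storing current state, the system stores the sequence of events and derives state by replaying them. A minimal, illustrative sketch in Java (the account events are my example, not the article's):

```java
import java.util.List;

// Event sourcing in miniature: current state is never stored directly;
// it is reconstructed by replaying the event log.
public class EventSourcingExample {

    sealed interface AccountEvent permits Deposited, Withdrawn {}
    record Deposited(long amount) implements AccountEvent {}
    record Withdrawn(long amount) implements AccountEvent {}

    // Replay the event log to reconstruct the current balance.
    static long balance(List<AccountEvent> log) {
        long balance = 0;
        for (AccountEvent e : log) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        List<AccountEvent> log =
                List.of(new Deposited(100), new Withdrawn(30), new Deposited(5));
        System.out.println(balance(log)); // prints 75
    }
}
```

Because the log is the source of truth, the same replay also gives you auditing and time travel for free, which is why the mode pairs naturally with CQRS.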

[Read More]

How to build low latency crypto trading systems using Java and Chronicle services

Categories

Tags miscellaneous jvm java app-development programming

Cryptocurrency trading is an emerging market with its own rules. However, when it comes to the need for low-latency arbitrage, that is, being able to react rapidly to changing market prices and placing orders ahead of the competition, there are lessons we can learn from optimizing classic trading systems. By Ivan Rakov.

In this article we’ll take a look at the techniques the Binance exchange uses to provide market data to its customers; the techniques and patterns it uses are similar to those of many other exchanges as well. Exchange updates are provided over a websocket connection, typically across multiple channels with differing latency characteristics.

The article then covers best practices for implementing an efficient market data connector:

  • Allocation-free parsing
  • Determinism and reproducibility
  • Zero-cost events serialization
  • Low latency oriented and flexible threading model
  • Resulting architecture of crypto market data connectivity layer

A classic way of parsing JSON is to delegate it to a library, which returns a structured representation such as a JSONObject. This representation is usually an object tree that is disposed of after parsing is complete. Continuous creation of temporary objects puts extra pressure on the JVM garbage collector, which in turn may result in unpredictable latency spikes. Read the full article to learn more!
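To make the contrast concrete, here is a hedged sketch of the allocation-free style: instead of asking a library for an object tree, the parser scans the raw characters in place and extracts the one field it needs (the field name and payload below are illustrative, not Binance's actual message format):

```java
// Allocation-free extraction of a numeric field from a JSON message:
// scan the characters in place rather than building an object tree, so the
// hot path creates no garbage. (Field name and payload are illustrative.)
public class AllocationFreeParse {

    // Parses the value of "price" without creating intermediate objects.
    static double parsePrice(CharSequence json) {
        final String key = "\"price\":";
        int i = indexOf(json, key);
        if (i < 0) return Double.NaN;
        i += key.length();
        double value = 0, frac = 0, div = 1;
        boolean inFrac = false;
        for (; i < json.length(); i++) {
            char c = json.charAt(i);
            if (c >= '0' && c <= '9') {
                if (inFrac) { frac = frac * 10 + (c - '0'); div *= 10; }
                else value = value * 10 + (c - '0');
            } else if (c == '.') inFrac = true;
            else break;
        }
        return value + frac / div;
    }

    // Substring search over a CharSequence, without allocating a String.
    static int indexOf(CharSequence haystack, String needle) {
        outer:
        for (int i = 0; i + needle.length() <= haystack.length(); i++) {
            for (int j = 0; j < needle.length(); j++)
                if (haystack.charAt(i + j) != needle.charAt(j)) continue outer;
            return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(parsePrice("{\"symbol\":\"BTCUSDT\",\"price\":42000.5}"));
    }
}
```

A real connector would of course handle negatives, exponents, and malformed input; the point of the sketch is only that nothing is allocated per message, so the garbage collector stays idle on the hot path.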

[Read More]

Handling multiline logs with Loki and Fluent Bit on Kubernetes

Categories

Tags kubernetes java web-development devops

Logging is one of the core parts of monitoring in an application’s life cycle. Fluent Bit is an open-source project for log processing and forwarding. In this post, I will point out some useful hints that I found while configuring Fluent Bit for our environment. Loki is another tool from Grafana, used for log aggregation. By @cleancloud-k8s.com.

In this tutorial you will learn how to install the following stack:

  • Prometheus: the de facto metrics collection tool in the monitoring landscape
  • Grafana: a great visualization platform (optional)
  • Loki: the log aggregation system
  • Fluent Bit: log processor and forwarder

The focus of this post is Fluent Bit, so it is worth writing a few more sentences about it. It is written in C, which makes it extremely performant, and since it is an open-source project, it is constantly developed and maintained. For further information, have a look at its web page. You will get a step-by-step tutorial on how to install all of the above, together with configuration and deployment YAML files for Kubernetes and Helm. Nice one!
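Since the post's topic is multiline logs, here is a hedged sketch of what a Fluent Bit multiline parser can look like (available in Fluent Bit 1.8+; the parser name and regexes below are illustrative assumptions, not the post's actual configuration):

```ini
[MULTILINE_PARSER]
    name          multiline-java
    type          regex
    flush_timeout 1000
    # A new Java log record starts with a timestamp like 2022-01-31 ...
    rule          "start_state"   "/^\d{4}-\d{2}-\d{2}/"   "cont"
    # Continuation lines (stack trace frames) start with whitespace
    rule          "cont"          "/^\s+at\s/"             "cont"
```

The parser is then referenced from a tail input via `multiline.parser multiline-java`, so a stack trace reaches Loki as a single log record instead of one record per line.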

[Read More]

Dependency injection in JavaScript: Write testable code easily

Categories

Tags nodejs javascript web-development frontend tdd

I struggled with two aspects of software development as a junior engineer: structuring large codebases and writing testable code. Test-driven development is such a common technique that is often taken for granted, but it’s not always clear how code can be made fully testable. By Nate Anderson.

This article shares a few powerful tools to help you write testable code that grows into neat, manageable code bases:

  • What is a dependency?
  • SOLID principles
  • Single responsibility principle
  • Dependency inversion principle
  • Example: An overwhelmed express handler for Node.js
  • Layered architecture for separation of concerns in JavaScript
  • Separation of concerns: An example
  • Mocking on the Fly with Jest
  • Benefits of mocking

The dependency inversion principle encourages us to depend on abstractions instead of concretions. This, too, has to do with separation of concerns. Observing the single responsibility principle means that we only unit test the one purpose a unit of code fulfills. In this post, the author takes a concrete example of an overwhelmed function and replaces it with a composition of smaller, testable units of code. Even with identical lines-of-code test coverage for both versions, when tests fail in the new version we know exactly what broke and why. Nice one!
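The article's examples target Node.js; the same dependency-inversion idea can be sketched in Java (the interface and class names are illustrative): the handler depends on an abstraction, so a test can inject an in-memory fake.

```java
import java.util.HashMap;
import java.util.Map;

// Constructor injection: the handler depends on an abstraction (UserStore),
// so tests can supply a fake without touching a real database.
// (The article's examples are in JavaScript; names here are illustrative.)
public class DependencyInjectionExample {

    interface UserStore {                  // abstraction, not a concretion
        String findName(int id);
    }

    static class GreetingHandler {
        private final UserStore store;

        GreetingHandler(UserStore store) { // the dependency is injected
            this.store = store;
        }

        String greet(int id) {
            return "Hello, " + store.findName(id) + "!";
        }
    }

    public static void main(String[] args) {
        // In a test, inject an in-memory fake instead of a real database.
        Map<Integer, String> fake = new HashMap<>();
        fake.put(1, "Nate");
        GreetingHandler handler = new GreetingHandler(fake::get);
        System.out.println(handler.greet(1)); // prints: Hello, Nate!
    }
}
```

Because `GreetingHandler` never constructs its own `UserStore`, a unit test exercises only the greeting logic, which is the single responsibility of that class.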

[Read More]

Top 10 Angular best practices to improve your Angular app

Categories

Tags angular web-development learning frontend css google

Despite all the compelling features offered by Angular, if you overlook coding practices in your Angular code, you may well face performance issues. By Archita Nayak.

Maintaining your code is very important, not only in Angular but in any framework, because it directly affects the output, and you surely don’t want a clumsy web app. In this blog, we will discuss Angular best practices to instantly improve your Angular app’s performance, and how to refine your Angular code with a clean-code checklist.

The article then names these best practices to improve your Angular application performance:

  • Use Angular CLI
  • Make use of trackBy
  • Try avoiding the use of logic in the component
  • Use of lazy loading
  • Prevent memory leaks
  • Declaring variable types rather than using any
  • Angular coding practices

… and more. You should try to develop coding practices in any technology or framework, which enhance the performance, readability, and understandability of the application. Best applications are developed when developers focus on how to code and what not to code. Good read!

[Read More]

Apache Kafka and MQTT - Overview and Comparison

Categories

Tags devops cloud learning messaging

Apache Kafka and MQTT are a perfect combination for many IoT use cases. This blog series covers various use cases across industries including connected vehicles, manufacturing, mobility services, and smart city. This is part 1: Overview + Comparison. By Kai Waehner.

MQTT is an open standard for a publish/subscribe messaging protocol. Open-source and commercial solutions provide implementations of different versions of the MQTT standard. MQTT was built for IoT use cases, including constrained devices and unreliable networks; however, it was not built for data integration and data processing.

The blog post then explains:

  • Apache Kafka vs. MQTT
  • When (not) to use MQTT?
  • When (not) to use Apache Kafka?
  • Kafka + MQTT = Match Made in Heaven
  • Example: Predictive maintenance with 100,000 connected cars

In conclusion, Apache Kafka and MQTT are a perfect combination for many IoT use cases. Follow the blog series to learn about use cases such as connected vehicles, manufacturing, mobility services, and smart city. Every blog post also includes real-world deployments from companies across industries. It is key to understand the different architectural options to make the right choice for your project. Good read!

[Read More]

4 reasons you need automation in integration

Categories

Tags cio ibm cloud learning

Learn how AI-powered Automation can transform the integration lifecycle and why it makes sense to deploy it in your own organization. By IBM Cloud Education.

AI-powered Automation brings an innovative approach to integration, increasing the speed and lowering the costs of integration projects. Though there are many reasons to automate your integrations, we’ll cover the top four.

The article then covers the following topics:

  • What is automation in integration?
  • Reason 1: Accelerate integration development
  • Reason 2: Boost integration quality
  • Reason 3: Increase efficiency and reduce costs
  • Reason 4: Ensure security, governance and availability

Automated integration enables organizations to update systems of record with integrity and at scale. The latest tools include protection for data at rest and in motion, which is often a regulatory requirement. Resiliency features and auto-scaling functionality help ensure that backend systems can manage workloads without costly and disruptive changes. Organizations can also identify deployment, operations and security issues as they happen, providing data to feed AI for future best practices and asset protection. Nice one!

[Read More]