Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Traffic Director and gRPC—proxyless services for your service mesh

Categories

Tags devops software-architecture kubernetes containers

Lots of organizations turn to service mesh because it solves tedious and complicated networking problems, especially in environments that make heavy use of microservices. It also allows them to manage application networking policies, like load balancing and traffic management policies, in a centralized place. By Stewart Reichling and Srini Polavarapu @Google.

But adopting a service mesh has traditionally meant (1) managing infrastructure (a control plane), and (2) running sidecar proxies (the data plane) that handle networking on behalf of your applications.

Traffic Director supports service mesh deployments that include both proxyless and proxy-based gRPC applications

Source: https://cloud.google.com/blog/products/networking/traffic-director-supports-proxyless-grpc

Google built Traffic Director, a Google Cloud-managed control plane, to solve that first barrier to service mesh adoption—you shouldn’t need to manage yet another piece of infrastructure (the control plane). With Traffic Director support for proxyless gRPC services, you can bring proxyless gRPC applications to your proxy-based service mesh or even have a fully proxyless service mesh.

gRPC handles connection management, bidirectional streaming, and other critical networking functions. In short, it’s a great framework for building microservices-based applications.

The article then describes:

  • Traffic Director support for proxyless gRPC services
  • gRPC + xDS
  • Getting started with proxyless gRPC
  • When to deploy Traffic Director with proxyless gRPC services

Enterprise networks are heterogeneous. Google built Traffic Director to be flexible, so it can support deployment options that meet your needs. Excellent read!

[Read More]

Singleton design pattern in Java

Categories

Tags java software-architecture programming

The singleton pattern is one of the most commonly used software design patterns. It belongs to the creational patterns. By Manoj Singh Saun.

The singleton pattern restricts the instantiation of a class to one “single” instance. It is useful when exactly one object is needed to coordinate actions across the system. A database connection is a good example: creating a database connection is a heavy and expensive job from a performance point of view, so it is better that a single connection is shared by multiple objects.

The singleton pattern can be implemented in many ways. Any implementation of the singleton pattern must satisfy the following points:

  • It should ensure that only one instance of the singleton class ever exists
  • It should provide global access to that instance

The article further describes and provides code examples for:

  • The classic way to create Singleton pattern (not thread safe)
  • Another example of Singleton using Eager Instantiation (thread safe)
  • Example of Singleton using synchronized (thread safe)
  • Example of Singleton using double checked locking (thread safe; the field needs to be volatile to provide consistency)
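The article’s examples are in Java; as a rough, language-neutral illustration of the double-checked locking idea from the last bullet, here is a minimal Python sketch (in Java the instance field must additionally be `volatile`, as noted above; in Python the lock alone makes this sketch safe):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # First check: skip the lock entirely on the common, already-created path.
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created the instance
                # while we were waiting for the lock.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

Every caller of `Singleton.instance()` receives the same object, and the lock is only ever taken during the first creation.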

Straightforward with detailed code explanation for each method used. Great!

[Read More]

Top 5 threats to APIs servicing mobile apps

Categories

Tags apis infosec web-development code-refactoring json restful

David Stewart put together this blog post about security threats to APIs. As mobile apps become increasingly paramount to operating successfully in today’s markets, a big question mark over API security is raised. Gartner has previously predicted that by 2022, “API abuses will be the most-frequent attack vector resulting in data breaches for enterprise web applications.” Since every mobile app out there is powered by APIs, securing them is clearly a top priority.

When it comes to APIs which service mobile apps, the trouble is that anyone – including attackers – can freely install an application on a device he/she controls to reverse engineer and study it for weaknesses.

The article then dives into:

  • MITM (man in the middle) attacks
  • Data scraping
  • Credential stuffing
  • App impersonation
  • DoS and DDoS attacks

APIs are a critical part of mobile apps and, as such, are increasingly becoming a target for hackers. Great read.

[Read More]

Mastering web components in Ionic 4

Categories

Tags app-development web-development nodejs javascript

In this series of posts we are going to go deeper into the new structure and core concepts of Ionic 4 and explore more advanced topics. The author also believes that the few structural changes that were made in Ionic 4 are a big win for the framework. By Agustin Haller.

The big thing about web components is encapsulation. This has a lot of benefits, but it also forces you to follow a stricter interface, leaving behind the anarchy and flexibility of Ionic 3’s non-web-component elements.

The article then dives deep into Ionic and describes:

  • A big step forward towards the future of the web: Web Components
  • Styling Ionic 4 components
  • Customizing Ionic 4 components
  • Getting started with Stencil
  • Creating a Web Component with Stencil
  • Building a multi-color SVG icon web component

Plenty of code examples with detailed explanations are also provided. They also released the Ionic 5 Full Starter App, an Ionic 5 template that you can use to jump-start your Ionic app development and save yourself hundreds of hours of design and development. Super exciting!

[Read More]

REST API with Elixir/Phoenix - beginner's tutorial

Categories

Tags apis web-development code-refactoring json restful erlang elixir

For this tutorial, we are going to write a simple Books REST API with database persistence using PostgreSQL. The requirements are to have a single endpoint on /api/books that allows CRUD operations over the books resource. By Dairon Medina Caro.

This step by step tutorial then explains:

  • Prerequisites
  • Getting started
  • Creating the Phoenix project
  • Setting up the database
  • Modelling our data
  • Generating the REST endpoints
  • Adding the routes
  • Running the App

This was all about the Phoenix REST API tutorial. While there seems to be a lot of witchcraft and generator magic, all it does is automate the generation of boring CRUD code so you can focus on your important business logic; all the generated code is very explicit and can be modified to suit your needs and code style. A link to the GitHub repo is provided, together with more resources for anybody interested in learning Erlang and Phoenix. Nice one!

[Read More]

🚀 Visualizing memory management in the JVM (Java, Kotlin, Scala, Groovy, Clojure)

Categories

Tags java scala jvm performance

In this multi-part series, the author aims to demystify the concepts behind memory management and take a deeper look at memory management in some of the modern programming languages. By Deepu K Sasidharan; JHipster co-lead, Java, JS, Cloud Native Advocate, Developer Advocate @ Adyen, author, speaker, software craftsman.

The series should give you some insights into what is happening under the hood of these languages in terms of memory management. In this chapter, we will look at the memory management of the Java Virtual Machine (JVM), used by languages like Java, Kotlin, Scala, Clojure, JRuby and so on.

Heap memory allocation

Source: https://deepu.tech/memory-management-in-jvm/

The article then focuses on:

  • JVM memory structure
  • Heap memory
  • Thread stacks
  • Meta space
  • Code cache
  • JVM memory usage (Stack vs Heap)
  • JVM Memory management: Garbage collection
  • Mark & Sweep Garbage collection

… and more. The JVM manages the heap memory by garbage collection. In simple terms, it frees the memory used by orphan objects, i.e., objects that are no longer referenced from the stack directly or indirectly (via a reference in another object), to make space for new object creation.
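The mark & sweep idea from the bullet list can be sketched in a few lines. This is a toy illustration, not how the JVM collectors actually work: the heap is modeled as a plain dict mapping object names to the names they reference, and anything unreachable from the root set is swept.

```python
def mark_and_sweep(heap, roots):
    """Toy mark & sweep: heap maps object -> list of referenced objects.

    Returns the surviving heap; everything not reachable from `roots`
    (directly or via another object) is reclaimed.
    """
    marked = set()
    stack = list(roots)
    # Mark phase: walk every reference reachable from the roots.
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])
    # Sweep phase: keep only marked objects, reclaiming the orphans.
    return {obj: refs for obj, refs in heap.items() if obj in marked}
```

For example, with `heap = {"a": ["b"], "b": [], "c": ["a"]}` and roots `["a"]`, object `c` is unreachable and gets swept even though it references a live object.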

A slide presentation is also provided for your benefit, together with code examples. Refreshing read!

[Read More]

Human inside: How capabilities can unleash business performance

Categories

Tags management cio agile learning teams

Companies need human capabilities more than ever. What can organizations do about it? As business pressures only increase, organizations need to help develop workers’ human capabilities—curiosity, imagination, creativity, empathy, and courage—and encourage their application across all levels and departments. By John Hagel, Cochairman, Deloitte Center for the Edge.

In the future of work, a paradox is becoming increasingly apparent and important: The more advanced and pervasive technology becomes, the more important humans are to the equation—humans as customers, humans as buyers, humans as engines of growth and innovation, humans as users, collaborators, and stakeholders. And leaders are seeing fresh importance in the ways in which organizations deploy and develop their people to create new value and navigate increasing ambiguity.

Why aren’t companies more focused on developing and making use of the human capabilities in their organizations? Why is this still an unrealized opportunity?

Innate human capabilities broaden the horizon

Source: @Deloitte analysis https://www2.deloitte.com/us/en/insights/focus/technology-and-the-future-of-work/building-capability-unleash-business-performance.html

The article deals with the following in great detail:

  • An untapped opportunity
  • The business benefits
  • How to approach cultivating capabilities
  • Why do capabilities matter?
  • Cultivating human capabilities offers tangible business benefits
  • Myths and misconceptions
  • How to cultivate capabilities throughout the organization

… and much more. Tons of good advice and pointers on how innovation, transformation, and leadership occur in many ways.

We liked this:

Human capability: Attributes that are demonstrated independent of context. Capabilities have value and applicability across different outcomes, sectors, and domains; they do not become obsolete.

Skill: The tactical knowledge or expertise needed to achieve work outcomes within a specific context. Skills are specific to a particular function, tool, or outcome, and they are applied by an individual to accomplish a given task.

Excellent read, very insightful for anybody in a leadership role!

[Read More]

Centralize your automation logs with Ansible Tower and Splunk Enterprise

Categories

Tags python ansible devops analytics big-data

For many IT teams, automation is a core component these days. But automation is not something on its own—it is part of a puzzle and needs to interact with the surrounding IT. By Leonardo Araujo.

The Red Hat Ansible Automation Platform is a solution to build and operate automation at scale. As part of the platform, Ansible Tower integrates well with external logging solutions, such as Splunk, and it is easy to set that up. In this blog post we will demonstrate how to perform the necessary configurations in both Splunk and Ansible Tower to let them work well together. Splunk is a data platform that enables you to bring data to every question, decision, and action.

This tutorial then describes in detail:

  • Setup of Splunk
  • Configuring Data Input with Red Hat Ansible Content Collections
  • Validating Data Input
  • Configuring Ansible Tower
  • Viewing the logs in Splunk
  • Creating a simple dashboard

In this post, the author demonstrates how to send the Ansible Tower usage logs to Splunk to enable a centralized view of all events generated by Ansible Tower. That way we can create graphs from various information, such as the number of playbooks that failed or succeeded, the modules most used in the executed playbooks, and so on. Plenty of screenshots, and all the playbook code is available. Superb!

[Read More]

Mastering AWS Kinesis data streams

Categories

Tags software-architecture event-driven messaging big-data cio data-science code-refactoring

An article by Anahit Pogosova in which she describes how she has been working with AWS Kinesis Data Streams for several years, dealing with over 0.5TB of streaming data per day. Rather than telling you about all the reasons why you should use Kinesis Data Streams (plenty is written on that subject), she will talk about the things you should know when working with the service.

One thing about Kinesis Streams that makes it a very powerful tool, in addition to its nearly endless scalability, is that you can attach custom data consumers to it to process and handle data in any way you prefer, in near real-time.

After writing it to a stream, data is available to read within milliseconds and is safely stored in the stream for at least 24 hours, during which you can “replay” the data as many times as you want. You can increase that time even further, to up to 7 days, but you will be charged extra for any time over 24h.

The article then covers:

  • Shards
  • Shards and Partition Keys
  • Serverless?
  • Writing to a stream
  • AWS SDK
  • Batch operations
  • Failures
  • Partial failures
  • Pricing

The main cause for these kinds of failures is exceeding the throughput of a stream or an individual shard. The most common reasons for that can be really tricky to fix: traffic spikes and network latencies. Both of them may cause records to arrive at the stream unevenly and cause sudden spikes in throughput. Plenty of code examples, links to further reading, and charts explaining the concepts. Excellent read!
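The partial-failure handling the article discusses can be sketched as follows. This is a minimal illustration, not the article’s own code: `put_fn` is a hypothetical stand-in for a batch call that mimics the shape of the Kinesis PutRecords response (a `FailedRecordCount` plus a per-record list where failed entries carry an `ErrorCode`), and only the failed records are retried.

```python
def put_records_with_retries(put_fn, records, max_attempts=3):
    """Retry only the records that failed in a PutRecords-style batch.

    put_fn(records) must return a dict shaped like the Kinesis
    PutRecords response: {"FailedRecordCount": n, "Records": [...]},
    where the i-th response entry corresponds to the i-th input record
    and contains an "ErrorCode" key if that record failed.
    Returns the records still failing after all attempts (empty on success).
    """
    pending = list(records)
    for _ in range(max_attempts):
        response = put_fn(pending)
        if response["FailedRecordCount"] == 0:
            return []
        # Response entries are positional: keep only the records whose
        # matching entry reported an error, and retry just those.
        pending = [
            rec for rec, res in zip(pending, response["Records"])
            if "ErrorCode" in res
        ]
    return pending
```

Retrying the whole batch instead of just the failed entries would duplicate the records that already succeeded, which is exactly the trap the article warns about.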

[Read More]

Introduction to Machine Learning K-Nearest Neighbors (KNN) algorithm in Python

Categories

Tags machine-learning big-data data-science fintech python

Machine Learning is one of the most popular approaches in Artificial Intelligence. Over the past decade, Machine Learning has become one of the integral parts of our life. It is implemented in a task as simple as recognizing human handwriting or as complex as self-driving cars. By Vibhu Singh.

In this blog, we will give you an overview of the K-Nearest Neighbors (KNN) algorithm and understand the step-by-step implementation of a trading strategy using K-Nearest Neighbors in Python.

K-Nearest Neighbors (KNN) is one of the simplest algorithms used in Machine Learning for regression and classification problems. KNN algorithms use data and classify new data points based on similarity measures (e.g. a distance function). Classification is done by a majority vote among a point’s neighbors: the data point is assigned to the class most common among its nearest neighbors. As you increase the number of nearest neighbors (the value of k), accuracy might increase.
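The majority-vote idea can be shown in a few lines. This is a toy sketch with made-up 2-D points and plain Euclidean distance, not the article’s trading-strategy code (which works on S&P 500 data):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    train: list of ((x, y), label) pairs.
    """
    # Sort training points by squared Euclidean distance to the query.
    by_distance = sorted(
        train,
        key=lambda item: (item[0][0] - query[0]) ** 2
                       + (item[0][1] - query[1]) ** 2,
    )
    # Majority vote among the k closest points decides the class.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]
```

For example, a query point surrounded by two “down” neighbors and one distant “up” neighbor is classified as “down” with k=3.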

The article is split into:

  • Import the libraries
  • Fetch the data - the S&P 500 data from Yahoo finance
  • Define predictor variable
  • Define target variables
  • Split the dataset
  • Instantiate KNN model
  • Create trading strategy using the model
  • Sharpe Ratio

Now that you know how to implement the KNN Algorithm in Python, you can start to learn how logistic regression works in machine learning and how you can implement the same to predict stock price movement in Python. Nice one!

[Read More]