Welcome to a curated list of handpicked free online resources related to IT, cloud, big data, programming languages, and DevOps: fresh news and a community-maintained list of links, updated daily.

Build a secure e-commerce app with SuperTokens and Hasura GraphQL

Categories

Tags app-development infosec web-development nosql apis

This tutorial will show you how to develop a secure e-commerce store using SuperTokens authentication in a React.js app. We’ll use a modern stack that includes React, Hasura GraphQL, and SuperTokens. By Ankur Tyagi.

SuperTokens is an open-source Auth0 alternative that allows you to set up authentication in less than 30 minutes.

The source code for the app we’re working on is available to view in this GitHub repo. SuperTokens provides authentication, and Hasura exposes a single GraphQL endpoint that you use on the frontend to send GraphQL queries and access data. Because that endpoint is public by default, SuperTokens is used to make it secure.

The article then dives into:

  • What is SuperTokens?
  • Why use SuperTokens?
  • What is Hasura?
  • Create a managed service with SuperTokens

The article contains all the code needed for the React frontend and database implementation, together with screengrabs for better understanding. You will integrate SuperTokens with Hasura: tokens generated by SuperTokens are sent from the UI in request headers to Hasura, where they are validated. Nice one!
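To illustrate the shape of that flow (a hedged sketch, not the article’s code – the endpoint, query, and schema here are hypothetical), a frontend request to Hasura might attach the SuperTokens-issued JWT like this:

```typescript
// Hypothetical sketch: POST a GraphQL query to Hasura, attaching the
// JWT issued by the SuperTokens session so Hasura can validate it.
export async function fetchProducts(endpoint: string, jwt: string): Promise<unknown> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`, // token from the SuperTokens session
    },
    body: JSON.stringify({
      // Illustrative query; the real schema comes from your Hasura tables.
      query: "query { products { id name price } }",
    }),
  });
  return (await response.json()).data;
}
```

In the article itself the SuperTokens frontend SDK manages the session; the sketch above only shows the shape of an authenticated GraphQL call.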

[Read More]

Archive DynamoDB items to S3 automatically

Categories

Tags aws serverless learning nosql database

Storing data like JSON logs in DynamoDB is a great idea, as DynamoDB is very scalable. In addition, it is easy to transfer data into a DynamoDB table using, for example, Lambda and the AWS SDK. It also makes analyzing the logs easier: the AWS Console offers great filtering options for searching for specific table items. By Martin Mueller.

This all sounds very good, but there is one hitch: cost. As the number of items increases, so does the cost. It is therefore advisable to delete the DynamoDB data from the table after a certain time, e.g. 30 days, and archive it to S3. The costs for S3 are much lower, and they can be reduced even further by using a cheaper storage class such as Glacier.

DynamoDB Streams invokes a Lambda, which writes the deleted item to S3. In the author’s example, the DynamoDB items are JSON logs with a few properties; in your case the items may look different, but the basic concept stays the same. You can find the code in this GitHub repo.

What is pretty cool is that DynamoDB Streams provides a batching feature: the Lambda can process the deleted items as a batch, which reduces the number of Lambda invocations and therefore the costs. The default batching settings are not quite ideal for this use case, so the author used the AWS Console and the Lambda invocation metrics to tune them – batchSize is set to 10000 and maxBatchingWindow to its maximum, so that a Lambda is really only invoked every 5 minutes. Nice one!

[Read More]

Easily debug Salesforce using Nebula Logger

Categories

Tags miscellaneous cio learning big-data

There is a regular requirement and a big use case for both admins and developers: they need to easily debug and surface issues in an app that is accessible to end users. Robust logging functionality is an essential part of the puzzle – even better if it can be customized to meet the needs of your specific organisation. By Atlas Can.

For anyone implementing or supporting an application, you’ve probably realized that no system is perfect. At some point, errors or issues can (and likely will) occur – this could be due to bugs, user errors, or issues with external integrations, among other things.

The Salesforce platform provides some out-of-the-box functionality for basic logging. This is available to admins via the Salesforce Developer Console, and provides information about exceptions, database operations, callouts, and more. However, this built-in logging comes with some major limits, which can make it difficult to rely on it alone to troubleshoot issues – especially if an issue is intermittent, or if the user does not know how to recreate it.

Logging is an important part of the day-to-day functioning of a Salesforce org. Most enterprise systems use event monitoring and have a logging framework designed for their specific use. However, you can use Nebula Logger to quickly solve your logging needs – it’s customizable and built around best practices. You can also add additional functionality on your own. Good read!

[Read More]

Optimizing AWS Lambda function performance for Java

Categories

Tags apis performance app-development serverless web-development java

This blog post shows how to optimize the performance of AWS Lambda functions written in Java, without altering any of the function code. It shows how Java virtual machine (JVM) settings affect the startup time and performance. You also learn how you can benchmark your applications to test these changes. By Benjamin Smith and Mark Sailes.

JVM Lambda lifecycle

Source: https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/

When a Lambda function is invoked for the first time, or when Lambda is horizontally scaling to handle additional requests, an execution environment is created. The first phase in the execution environment’s lifecycle is initialization (Init). For Java managed runtimes, a new JVM is started and your application code is loaded. This is called a cold start. Subsequent requests then reuse this execution environment. This means that the Init phase does not need to run again. The JVM will already be started. This is called a warm start.

The article then explains:

  • How can you improve cold start latency?
  • Language-specific environment variables
  • Customer facing APIs
  • Applying the optimization
  • Other use cases

Tiered compilation is a feature of the Java virtual machine (JVM) that allows it to make the best use of both of its just-in-time (JIT) compilers. The C1 compiler is optimized for fast start-up time; the C2 compiler is optimized for the best overall performance, but uses more memory and takes longer to achieve it. Setting the tiered compilation level to 1 can reduce cold start latency: the JVM then uses only the C1 compiler, which quickly produces optimized native code but does not generate any profiling data and never invokes the C2 compiler.

In this post, you learn how to improve Lambda cold start performance by up to 60% for functions running the Java runtime. Thanks to the recent changes in the Java execution environment, you can implement these optimizations by adding a single environment variable. You can explore the code for this example in the GitHub repo. Excellent read!
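Concretely, the optimization boils down to one environment variable on the function (a sketch with the values described above; how you set it depends on your tooling – console, CLI, or IaC):

```shell
# Set on the Lambda function's configuration.
# Stops the JIT at tier 1 (C1 only) to cut JVM startup work on cold starts.
export JAVA_TOOL_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
```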

[Read More]

An introduction to generics in Golang

Categories

Tags apis app-development cloud programming web-development golang

The Go 1.18 release adds support for generics. Generics are the biggest change we’ve made to Go since the first open source release. By Robert Griesemer and Ian Lance Taylor.

In this article we’ll introduce the new language features. We won’t try to cover all the details, but we will hit all the important points. Generics are a way of writing code that is independent of the specific types being used. Functions and types may now be written to use any of a set of types.

Generics add three big new things to the language:

  • Type parameters for functions and types
  • Defining interface types as sets of types, including types that don’t have methods
  • Type inference, which permits omitting type arguments in many cases when calling a function

Functions and types are now permitted to have type parameters. A type parameter list looks like an ordinary parameter list, except that it uses square brackets instead of parentheses.

We can make a function generic - make it work for different types - by adding a type parameter list. In this example we add a type parameter list with a single type parameter T, and replace the uses of float64 with T.

import "golang.org/x/exp/constraints"

func GMin[T constraints.Ordered](x, y T) T {
    if x < y {
        return x
    }
    return y
}

Generics are a big new language feature in 1.18. These language changes required a large amount of new code that has not had significant testing in production settings; that will only happen as more people write and use generic code. You will also find a video of the talk from GopherCon 2021 in the article. Good read!

[Read More]

Pedal to the metal with PlanetScale and Rust

Categories

Tags app-development cloud programming database

We wanted our technology choices to support our mission: a greener modern world, supported by a cleaner modern web. To do that, we decided to take a chance on two novel inventions: Rust for our serverless back-end, and PlanetScale for an enterprise-quality database. By Thomas.

Rust is a memory-safe systems programming language that has secured the title of most beloved language in StackOverflow’s annual developer survey for the past six years running. It’s not just hype: programs written in Rust can run as fast as the processor allows, with no VM or garbage collector taking up cycles. Then there’s PlanetScale, which provides small teams with exceptional cloud database features (like sharding and size-balancing) under a progressive pricing structure.

For the purpose of this guide, we’ll be developing a profile API, or org-service … using PlanetScale, Rust, the Rocket web-framework, and the Diesel ORM. You can view the finished project on GitHub.

The article then covers step by step:

  • Creating a database
  • Our first migration
  • Building the API service
  • Deploying the API service

All the steps are accompanied by well-explained code, and you will also learn to package the service into a Docker image. From there, you could easily deploy it to Google Cloud Run (or AWS, or Azure)! Nice one!

[Read More]

Best practices, tools, and approaches for Kubernetes monitoring

Categories

Tags event-driven app-development cloud monitoring kubernetes containers devops

Let’s look at some of the available Kubernetes monitoring and Kubernetes logging tools, including Prometheus for monitoring and Grafana for visualization and dashboards. By Kyle Hunter.

Kubernetes monitoring is the process of gathering metrics from the Kubernetes clusters you operate to identify critical events and ensure that all hardware, software, and applications are operating as expected. Aggregating metrics in a central location will help you understand and protect the health of your entire Kubernetes fleet and the applications and services running on it.

There are a variety of popular tools that can enhance your Kubernetes container monitoring efforts. Some of the most common ones include:

  • Prometheus: An open-source event monitoring and alerting tool that collects and stores metrics as time series data. Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project after Kubernetes.
  • Grafana: A visualization platform for applications and infrastructure that works with monitoring software such as Prometheus. Grafana provides capabilities to collect, store, visualize, and alert on data.
  • Thanos: A metric system that provides a simple and cost-effective way to centralize and scale Prometheus-based monitoring systems.
  • Elasticsearch: A distributed, JSON-based search and analytics engine.
  • Logstash: An open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite stash.
  • Kibana: A data visualization and exploration tool used for log and time series analytics, application monitoring, and operational intelligence use cases.

Many teams use these monitoring and logging tools alone or in combination to create their own solutions and address specific container monitoring and Kubernetes application monitoring needs. Good read!
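As a flavour of how Prometheus discovers workloads in a cluster, a minimal scrape configuration might look like this (an illustrative sketch, not from the article):

```yaml
# Illustrative prometheus.yml fragment: discover pods via the Kubernetes
# API and scrape only those annotated with prometheus.io/scrape: "true".
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```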

[Read More]

How to develop Event-Driven architectures

Categories

Tags event-driven web-development app-development messaging java software-architecture

The author looks at how we can use the open source Chronicle Queue and Chronicle Wire libraries to structure applications around an Event-Driven Architecture (EDA). EDA is a design pattern in which decoupled components (often microservices) can asynchronously publish and subscribe to events. By Rob Austin.

At a high level, event-driven architectures are usually made up of application components connected via an async message system. The events flow as messages between the application components and these components act independently as they don’t need to know about other components. All a component needs to know is how to process incoming messages and how to send messages upon the completion of business logic. In other words, Event-Driven Architectures are basically fire and forget.

package net.openhft.chronicle.wire.examples;

import net.openhft.chronicle.wire.JSONWire;
import net.openhft.chronicle.wire.Wire;

public class WireExamples4 {

    interface Printer {
        void print(String message);
    }

    public static void main(String[] args) {
        Wire wire = new JSONWire();

        // The method writer serialises each call to print() as an event on the wire.
        final Printer printer = wire.methodWriter(Printer.class);
        printer.print("hello world");

        // The method reader replays one event, dispatching it to the supplied Printer.
        wire.methodReader((Printer) System.out::println).readOne();
    }
}

When it comes to choosing the message bus, if the components are written in Java, the open source library Chronicle Queue is a natural choice. This is especially beneficial if the components are on the same host, as Chronicle Queue is a point-to-point messaging layer that works by writing your events to shared off-heap memory; it can offer messaging performance characteristics close to reading and writing RAM. There is also an extension to Chronicle Queue that allows it to send replicated messages over the network, which is required to support HA/DR and server failover.

The author then creates an example app demonstrating how to start building an event-driven solution using Chronicle Wire and Chronicle Queue, beginning with a very simple example that constructs a Java method call. Nice tutorial!

[Read More]

How TypeScript design patterns help you write better code

Categories

Tags react javascript web-development app-development

TypeScript is a language that has seen a lot of exposure in the modern world of software engineering. Its powerful, strict type system reduces the likelihood of running into errors at runtime or hidden bugs in production caused by the lack of strong types in JavaScript, the language TypeScript builds on. By Eslam Hefnawy.

While TypeScript makes the development experience smooth on its own, it’s important to know how a design pattern can help retain or improve the efficiency and usefulness of a codebase. Using the right design pattern in the right situation and knowing how to do so is a crucial skill in any developer’s toolbox.

The main content of the article covers the following patterns:

  • The observer pattern: I know what happened to you
  • The builder pattern: Few subclasses, few problems
  • The proxy pattern: The ideal middleman
  • Use the right design pattern in the right situation

Most importantly, design patterns are language-agnostic and can be implemented in any language to solve the kind of problem that a particular design pattern intends to solve. Good read!
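As a taste of the first pattern on that list, here is a minimal observer sketch in TypeScript (illustrative only; the names are not from the article):

```typescript
// Observer pattern: a subject keeps a list of observers and notifies
// each of them whenever its state changes.
type Observer<T> = (value: T) => void;

export class Subject<T> {
  private observers: Observer<T>[] = [];

  subscribe(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  notify(value: T): void {
    for (const observer of this.observers) {
      observer(value);
    }
  }
}

// Usage: a price feed with a decoupled listener.
export const seen: number[] = [];
const priceFeed = new Subject<number>();
priceFeed.subscribe((p) => seen.push(p));
priceFeed.notify(42);
```

The subject never needs to know who is listening, which is exactly what makes the pattern useful for loosely coupled UI and state code.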

[Read More]

An introduction to event-driven architecture

Categories

Tags miscellaneous cio event-driven messaging app-development programming

What is an event-driven architecture? Event-driven architecture (EDA) refers to a message processing approach in which components interact by producing and consuming events, generally implemented through a publish-subscribe model. By JIN.

Also, an EDA does not have a deterministic response time for processing input events. Event notifications signal a change in the state of the system, which can be triggered by an input event.

The traditional request-driven model and EDA are complementary; the request-driven model depends on trust between the collaborating components. In short: request = command, event = trigger.

EDA comprises four modes:

  • Event notification
  • Event-carried state transfer
  • Event sourcing
  • Command Query Responsibility Segregation (CQRS)

There are several core aspects to understand. Publishing and subscribing requires a message queue service such as Kafka, RocketMQ, or RabbitMQ. The message event stream also needs to connect to a database: during message processing, the database must support high-intensity random lookups over real-time streams, so that the data related to an order can be found and the right message passed on to notify the downstream service. An event stream processing flow goes through four steps (collect, enhance, analyze, and dispatch) to generate an output event; engines include Apache Flink, Amazon Kinesis, and Azure Stream Analytics. And there is so much more to learn in this article!
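The four-step flow above can be sketched as a toy pipeline in TypeScript (purely illustrative; real pipelines run on engines like Flink or Kinesis, and the event shapes here are invented):

```typescript
// Toy event-stream-processing pipeline: collect, enhance, analyze, dispatch.
interface RawEvent { orderId: number; amount: number }
interface EnrichedEvent extends RawEvent { currency: string }

// Collect: gather raw events (hard-coded here in place of a real stream).
const collect = (): RawEvent[] => [
  { orderId: 1, amount: 250 },
  { orderId: 2, amount: 40 },
];
// Enhance: join each event with reference data (here, a fixed currency).
const enhance = (e: RawEvent): EnrichedEvent => ({ ...e, currency: "USD" });
// Analyze: keep only the events of interest (large orders).
const analyze = (e: EnrichedEvent): boolean => e.amount > 100;
// Dispatch: produce the output event for the downstream service.
const dispatch = (e: EnrichedEvent): string =>
  `notify downstream: order ${e.orderId}`;

export const output = collect().map(enhance).filter(analyze).map(dispatch);
```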

[Read More]