Welcome to a curated list of handpicked free online resources related to IT, cloud, big data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily.

A deeper dive into WebAssembly, the new executable format for the web

Tags json web-development app-development nginx javascript

The author recently spoke with some industry experts about three technologies they predict will be the Next Big Things. One of the three in particular deserves a more detailed look: WebAssembly (often abbreviated as Wasm). Wasm has caught the interest of many because it extends browsers' language support beyond JavaScript. By Dave McAllister of F5.

No, it’s not a replacement for JavaScript; rather, it’s the fourth and newest language accepted by the World Wide Web Consortium (W3C) as an official web standard (along with HTML, CSS, and JavaScript).

Back in 2015, Mozilla started work on a new standard to define a “portable, size- and load-time-efficient format and execution model” as a compilation target for web browsers. WebAssembly was designed to allow languages other than JavaScript to run within the browser, and it quickly caught on with browser vendors, with all the major browsers now supporting it.

Why should you care about WebAssembly?

  • Speed/performance
  • Size
  • Cross‑platform
  • Multi‑lingual
  • Security

If you have to run untrusted code in your browser, it must be isolated. Wasm achieves isolation with memory-safe, sandboxed execution environments. The current implementation isn’t perfect, but Wasm contributors are heavily focused on it, so I expect rapid improvement. The article includes links to further reading, a YouTube video, and resources to get you started with Wasm. Good read!

[Read More]

How to add Playwright tests to your pull request CI with GitHub Actions

Tags tdd nodejs web-development app-development

If you’re like me, you really appreciate a test automation step as part of your pull request (PR) CI for that added confidence before merging code. I want to show you how to add Playwright tests to your PRs and how to tie it all together with a GitHub Actions CI workflow. By Liran Tal.

import { test, expect } from '@playwright/test';

test('page should have title of "Dogs security blog"', async ({ page }) => {
  await page.goto('http://localhost:3000/');
  const title = await page.title();
  expect(title).toBe("Dogs security blog");
});

If you’ve never come across Playwright before: the test automation framework had its first release in 2017 but has recently grown in popularity as another of Microsoft’s developer tools (alongside Visual Studio Code and others). Playwright is a great way to write end-to-end (E2E) tests easily while also targeting cross-browser compatibility. I’ve used both Selenium and Cypress in the past, and if you’ve had any similar experience, Playwright will surely remind you of the latter. It’s easy to get started with, easy to write tests in, and has built-in measures to keep tests from being flaky.

In this article you will learn:

  • The basics of how to write end-to-end tests with Playwright
  • How to run Playwright tests in your GitHub Actions CI
  • How to run Playwright tests for your deployed Netlify preview URLs
  • How to preserve Playwright debug traces and make them available as build artifacts in GitHub Actions CI

Unlike Cypress, another test automation tool, which injects itself as a library into the web page’s DOM as its primary means of controlling the browser, Playwright drives the automation through native browser APIs. Very interesting!
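
A matching GitHub Actions workflow for the PR check described above might look like this sketch (the action versions, Node version, and `test-results/` trace path are assumptions to adapt to your project):

```yaml
name: e2e
on: pull_request

jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      # Install browsers plus the OS packages they need on the runner
      - run: npx playwright install --with-deps
      - run: npx playwright test
      # Preserve Playwright debug traces as build artifacts, even on failure
      - uses: actions/upload-artifact@v3
        if: always()
        with:
          name: playwright-traces
          path: test-results/
```

The `if: always()` condition on the upload step is the piece that keeps traces around for debugging precisely when the tests fail.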

[Read More]

Scala Toolkit makes Scala powerful straight out of the box

Tags scala akka data-science java app-development

Scala Toolkit is an ongoing effort by Scala Center and VirtusLab to compose a set of approachable libraries to solve everyday problems. These libraries will be made easily accessible as a precomposed package. This package will be available for each Scala release. By Szymon Rodziewicz.

We constructed the main measures from our adjusted variation of the Cognitive Dimensions Framework, looking at the cognitive cost and time required to interact with each library. Alongside that, we measured a set of practical aspects of each library: tests; responsiveness and availability of the maintainers; documentation; popularity; dependencies; dependency stability; small size; API stability; versioning scheme; and cross-platform support.

Note that Scala Toolkit does not intend to create libraries of its own. The goal is to work with the maintainers of existing, battle-tested libraries. Neither will we promote these libraries as universal solutions. We plan to allow users to use them easily while acknowledging that other libraries will be better suited in certain situations.

All libraries featured in the Toolkit will arrive with carefully prepared knowledge bases featuring well-structured practical information. In addition, all libraries will be available in one place, allowing the developers to find solutions and example snippets to solve their problems quickly. Nice one!

[Read More]

Full end-to-end deployment of a machine learning algorithm into a live production environment

Tags big-data data-science devops

This older article guides you through using scikit-learn, pickle, Flask, Microsoft Azure, and ipywidgets to fully deploy a Python machine learning algorithm into a live production environment. By Graham Harrison.

You will get these steps, describing how a machine learning algorithm can be fully deployed into a live production environment so that it can be “consumed” in a platform-agnostic way:

  • Develop a machine learning algorithm
  • Make an individual prediction from the trained model
  • Develop a web service wrapper
  • Deploy the web service to Microsoft Azure
  • Add the Azure app service extension to VS Code
  • Build a client application to consume the Azure-deployed web service

There are quite a few steps involved, but using readily available libraries and free tools, including scikit-learn, pickle, Flask, Microsoft Azure and ipywidgets, we have constructed a fully working, publicly available cloud deployment of a machine learning algorithm and a fully functioning client to call and consume the web service and display the results. Nice one!
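
To make the wrapper idea concrete, here is a dependency-free sketch of the pickle round trip and the JSON-in/JSON-out contract the web service exposes. `ToyModel` is an invented stand-in for a trained scikit-learn model, and `predict_endpoint` is what a Flask route would wrap:

```python
import json
import pickle

class ToyModel:
    """Stand-in for a trained scikit-learn model: any object with a
    predict() method can be pickled and served the same way."""
    def predict(self, rows):
        return [sum(row) for row in rows]

# Persist the trained model with pickle, as the article does...
blob = pickle.dumps(ToyModel())
# ...then the web service wrapper reloads it once at startup.
model = pickle.loads(blob)

def predict_endpoint(json_body):
    # JSON in, JSON out: the platform-agnostic contract that a Flask
    # route (or any other web framework) would expose over HTTP.
    rows = json.loads(json_body)["rows"]
    return json.dumps({"prediction": model.predict(rows)})

print(predict_endpoint('{"rows": [[1, 2, 3]]}'))  # {"prediction": [6]}
```

Because the client only sees JSON over HTTP, it never needs to know the model was built with scikit-learn, which is what makes the deployment platform-agnostic.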

[Read More]

Faster MQTT data collection with InfluxDB

Tags database app-development web-development devops

Native MQTT eliminates the need to write custom code, orchestrate additional technology layers, or incorporate additional hosting services. By Jason Myers.

MQTT is a powerhouse within the Internet of Things (IoT) space. Its pub/sub model and lack of defined payload structure make it infinitely adaptable to the needs of modern sensors, devices and systems. IoT data is also time-series data. Time-stamped data enables businesses and applications to track real-time and historical change, and it can also contribute to forecasting and prediction.

The article then describes:

  • Configuring Native MQTT
    • Broker details: Specify the IP address, port and authentication parameters for your MQTT message broker
    • Topic name: Provide the name(s) for the topic(s) you want to subscribe to
    • Parsing rules: Set up parsing rules to map elements in your MQTT messages to the different elements of InfluxDB’s line protocol data model: measurements, timestamps, fields, and tags
  • Data parsing options
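
The parsing-rule idea can be sketched in plain Python. The topic layout, the `device` tag, and the rule choices below are invented for illustration and are not InfluxDB's actual configuration syntax:

```python
import json

def mqtt_to_line_protocol(topic, payload, timestamp_ns):
    """Map an MQTT JSON payload onto InfluxDB line protocol.

    Hypothetical parsing rules: the topic's last segment becomes the
    measurement, the 'device' key becomes a tag, and the remaining
    numeric keys become fields.
    """
    data = json.loads(payload)
    measurement = topic.rsplit("/", 1)[-1]
    tags = f"device={data.pop('device')}"
    fields = ",".join(
        f"{k}={v}" for k, v in data.items() if isinstance(v, (int, float))
    )
    # Line protocol: measurement,tag_set field_set timestamp
    return f"{measurement},{tags} {fields} {timestamp_ns}"

line = mqtt_to_line_protocol(
    "sensors/temperature",
    '{"device": "probe-1", "value": 21.5}',
    1700000000000000000,
)
print(line)  # temperature,device=probe-1 value=21.5 1700000000000000000
```

With native MQTT support, this mapping is expressed declaratively in the InfluxDB UI rather than as code like the above.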

Native MQTT gives developers a way to collect time-series data in the cloud with a single step. Eliminating the need to write custom code, orchestrate additional technology layers or incorporate additional hosting services means that developers can spend more time actually using their collected data and less time configuring or managing infrastructure. Good read!

[Read More]

The ultimate guide to redirects: URL redirections explained

Tags browsers app-development web-development search

Redirects send users from one URL to another. The first URL is the one the user clicked, typed in, or otherwise requested. The second is the new destination URL. By Kelly Lyons, senior blog editor @ Semrush.

Redirections work pretty much the same way for search engines. They send search engines from one particular URL to another.

You will get a thorough explanation of:

  • What are redirects?
  • Why are redirects important?
  • When to use redirects
  • Types of redirects
  • HTTP redirects
  • Meta refresh redirects
  • JavaScript redirects
  • How to implement redirects
  • 5 redirect best practices

When you set up a redirect, make sure that the new page’s content is a close match to the old page’s. For example, redirecting an expired specials page to your main specials page instead of your homepage makes a lot more sense. Google essentially skips over true 404 pages. This doesn’t happen with soft 404 errors, so it’s best to avoid them and fix any existing errors. Long and informative article!
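
As a sketch of the HTTP redirect mechanism the article covers, here is a minimal stdlib-only Python server that answers a request for a moved page with a 301 and a `Location` header (the `/old-specials` and `/specials` paths are made up):

```python
import http.client
import http.server
import threading

# Hypothetical mapping of retired URLs to their closest current match,
# e.g. an expired specials page pointing at the main specials page.
REDIRECTS = {"/old-specials": "/specials"}

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            self.send_response(301)  # 301 = permanent redirect
            self.send_header("Location", REDIRECTS[self.path])
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client (unlike urllib) does not follow redirects, so we can
# observe the raw 301 response a browser or crawler would receive.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/old-specials")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # 301 /specials
```

A browser or search engine crawler follows the `Location` header to the new URL; the 301 status is what tells search engines the move is permanent.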

[Read More]

Using Watir to automate web browsers with Ruby

Tags tdd performance app-development web-development browsers

Browser automation describes the process of programmatically performing certain actions in the browser (or handing these actions over to robots) that might otherwise be quite tedious or repetitive to be performed manually by a human. By Jude Ero.

Watir, pronounced “water,” is a family of Ruby libraries for automating web browsers. It allows you to write tests that are easy to read and maintain. In other words, it is a simple and flexible tool.

Also in the article:

  • What is browser automation?
  • Implementing Watir
  • Installing gems
  • Setting up Watir
  • Launching a browser
  • Finding and interacting with elements
  • Extracting data from the web page
  • Executing JavaScript

Watir is a family of libraries for web browser testing and automation. It is highly regarded in the Ruby community and easy to learn and use. In this tutorial, you learned how to set it up and harness its most common functionalities. Interesting!
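
A hedged sketch of those basics in Ruby follows; it needs a locally installed browser and driver to actually run, and the URL and selectors are invented:

```ruby
require 'watir'

# Launch a browser (Watir drives it through Selenium under the hood).
browser = Watir::Browser.new :chrome

# Navigate, then find and interact with elements by attribute.
browser.goto 'https://example.com/search'
browser.text_field(name: 'q').set 'watir'
browser.button(type: 'submit').click

# Extract data from the page.
puts browser.title
browser.divs(class: 'result').each { |div| puts div.text }

# Execute JavaScript when the API does not cover a case.
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')

browser.close
```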

[Read More]

Building a secure SaaS application with Amazon API Gateway and Auth0 by Okta

Tags apis serverless infosec cloud app-development web-development microservices

Most applications require a form of identity service to manage, authenticate, and authorize users. In software-as-a-service (SaaS) applications, multi-tenancy adds specific challenges to this task that are important aspects to consider when designing a multi-tenant identity management service. By Humberto Somensi.

In this post, the author dives deep into the Auth0 identity platform, describing how to leverage Auth0 Organizations to enable multi-tenant identity in SaaS solutions and how to integrate it with Amazon API Gateway, covering:

  • Auth0 essential building blocks
  • Auth0 Organizations: Your tenants in a nutshell
  • Multi-Tenant setup with Auth0 organizations
  • Onboarding new tenants
  • Login flow
  • Securing your application with Amazon API Gateway
  • Using SaaS Identity to harden your tenant isolation posture
  • Exploring More Complex Use Cases

Identity is an important and complex subject in any context. When analyzed from a multi-tenant perspective, some new challenges are imposed. Like with anything we do at Amazon, start by understanding what your customers require. Then, select the appropriate identity provider and design your application to meet your customer needs. Very informative!
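
Auth0 Organizations stamps issued tokens with an `org_id` claim identifying the tenant. As a stdlib-only illustration, here is how a service might read that claim from a JWT payload; note this deliberately skips signature verification (against Auth0's JWKS), which a real service must do before trusting any claim, and the token built here is a toy:

```python
import base64
import json

def unverified_org_claim(token):
    """Read the `org_id` claim from a JWT's payload segment.

    Illustration only: trusting claims without verifying the RS256
    signature would defeat the point of the identity provider.
    """
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims.get("org_id")

def b64url(obj):
    # JWT segments are base64url-encoded JSON without padding.
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Build a toy header.payload.signature token with a hypothetical tenant id.
token = ".".join([b64url({"alg": "RS256"}), b64url({"org_id": "org_abc123"}), "sig"])
print(unverified_org_claim(token))  # org_abc123
```

Once validated, a claim like this is what lets an API Gateway authorizer scope each request to its tenant.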

[Read More]

What is green computing?

Tags serverless cio miscellaneous app-development web-development cloud

Green computing, also called sustainable computing, aims to maximize energy efficiency and minimize environmental impact in the ways computer chips, systems and software are designed and used. By Rick Merritt.

Mobile users demand maximum performance and battery life. Businesses and governments increasingly require systems that are powerful yet environmentally friendly. And cloud services must respond to global demands without making the grid stutter.

The article does a good job of explaining:

  • Why is green computing important?
  • What are the elements of green computing?
  • What’s the history of green computing?
  • A pioneer in energy efficiency
  • A green computing benchmark
  • AI and networking get more efficient
  • What’s ahead in green computing?

… and more. Green computing hit the public spotlight in 1992, when the U.S. Environmental Protection Agency launched Energy Star, a program for identifying consumer electronics that met standards in energy efficiency. In an effort to accelerate climate science, NVIDIA announced plans to build Earth-2, an AI supercomputer dedicated to predicting the impacts of climate change. It will use NVIDIA Omniverse, a 3D design collaboration and simulation platform, to build a digital twin of Earth so scientists can model climates in ultra-high resolution. Nice one!

[Read More]

The future is serverless

Tags serverless ibm app-development web-development microservices

Why serverless computing is the future of all cloud computing. Since the introduction of cloud computing, the field has experienced a series of back-and-forth evolutions, partly driven by cost factors that repeated themselves in various guises. In recent years, however, a new motivating factor may help cement the next evolution of cloud computing. By Michael Maximilien, David Hadas, Angelo Danducci II, and Simon Moser.

Serverless computing was created to solve the problem of allocating cloud compute resources. It adds automation that eliminates the need for users to predetermine the amount of compute resources their workload requires. As an open-source example, Knative adds scaling automation on top of Kubernetes-based cloud platforms, making scaling decisions for workload services in line with actual service demand: as requests come in, Knative adjusts compute resources to match, scaling the number of service pods up without a fixed limit (assuming Kubernetes has the resources) and, when requests dry up, scaling back down, eventually to zero pods.
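
That scale-to-zero behavior is configured per Knative Service via autoscaling annotations; a sketch follows (the service name and container image are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Let the service scale to zero pods when idle...
        autoscaling.knative.dev/min-scale: "0"
        # ...and cap it under load.
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # hypothetical image
```

With `min-scale: "0"`, a service that receives no requests consumes no compute at all until the next request arrives, which is the core of the cost and energy argument the authors make.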

In this blog post, we make the case and paint a vision that serverless computing is the future of cloud computing. The argument centers around the following premises:

  • Cloud computing is at the center of the modern interconnected world. Most modern applications use cloud compute applications for aggregating and processing data and for constructing information that edge devices need.
  • Cloud computing demand is expected to grow annually by 15%.
  • Cloud computing is projected to reach 50% of IT spending in key market segments.
  • Cloud computing already consumes 1-1.5% of global energy and its growth represents an actual threat to the environment.

In this blog post, the authors explain the motivations for serverless computing becoming the future of all cloud computing workloads, arguing that this serverless-first future has various potential benefits for both cloud users and providers. Nice one!

[Read More]