Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

How to create your own Google Chrome extension

Categories

Tags browsers javascript web-development app-development

If you are a Google Chrome user, you’ve probably used some extensions in the browser. Have you ever wondered how to build one yourself? In this article, I will show you how you can create a Chrome extension from scratch. By Sampurna Chapagain.

The article will help you understand the following:

  • What is a Chrome Extension?
  • What will our Chrome Extension Look Like?
  • How To Create a Chrome Extension
  • Creating a manifest.json file

A Chrome extension is a program installed in the Chrome browser to enhance the browser's functionality. You can build one easily using web technologies like HTML, CSS, and JavaScript. Building a Chrome extension is similar to building any web application; the only difference is that a Chrome extension requires a manifest.json file where all the configuration is kept. Good read!
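As a minimal illustration of that configuration file (the name, description, and popup file here are placeholders, not the article's example), a Manifest V3 manifest.json can be as small as:

```json
{
  "manifest_version": 3,
  "name": "Hello Extension",
  "version": "1.0",
  "description": "A minimal example extension.",
  "action": {
    "default_popup": "popup.html"
  }
}
```

Everything else in the extension, such as the popup page and its scripts, is plain HTML, CSS, and JavaScript referenced from this file.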

[Read More]

The file system access API with Origin Private File System

Categories

Tags browsers javascript web-development cio

It is very common for an application to interact with local files. For example, a general workflow is opening a file, making some changes, and saving the file. By Sihui Liu.

For web apps, this can be hard to implement. It is possible to simulate file operations using the IndexedDB API, an HTML input element with the file type, an HTML anchor element with the download attribute, etc., but that requires a good understanding of these standards and careful design for a good user experience. Also, performance may not be satisfactory for frequent operations and large files.
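The Origin Private File System the article describes avoids that simulation layer. A minimal sketch of the open-edit-save workflow against OPFS (the file name and content are illustrative; this runs only in browsers that support the API):

```javascript
// Save text to a file in the Origin Private File System (OPFS).
async function saveDraft(text) {
  const root = await navigator.storage.getDirectory(); // OPFS root directory
  const handle = await root.getFileHandle('draft.txt', { create: true });
  const writable = await handle.createWritable();
  await writable.write(text);
  await writable.close(); // changes are committed on close
}

// Read the file back.
async function loadDraft() {
  const root = await navigator.storage.getDirectory();
  const handle = await root.getFileHandle('draft.txt');
  const file = await handle.getFile();
  return file.text();
}
```

Because OPFS files are private to the origin, no permission prompt is shown, which is what makes it suitable for frequent, automatic saves.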

The article then describes:

  • Origin Private File System
  • Persistence
  • Browser Support
  • API
  • Examples

The API is currently unavailable for Safari windows in Private Browsing mode. Where it is available, its storage lifetime is the same as for other persistent storage types like IndexedDB and LocalStorage, and the storage policy conforms to the Storage Standard. Safari users can view and delete file system storage for a site via Preferences on macOS or Settings on iOS. Nice one!

[Read More]

OPC UA, MQTT, and Apache Kafka - The Trinity of data streaming in IoT

Categories

Tags queues messaging cloud analytics

In the IoT world, MQTT and OPC UA have established themselves as open and platform-independent standards for data exchange in Industrial IoT and Industry 4.0 use cases. Data Streaming with Apache Kafka is the data hub for integrating and processing massive volumes of data at any scale in real-time. By Kai Waehner.

Machine data must be transformed and made available across the enterprise as soon as it is generated to extract the most value from it. As a result, operations can avoid critical failures and increase overall plant effectiveness.

Decision tree for evaluating IoT protocols

Source: https://www.kai-waehner.de/blog/2022/02/11/opc-ua-mqtt-apache-kafka-the-trinity-of-data-streaming-in-industrial-iot/
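The gist of such a decision tree can be caricatured as a tiny function. The rules below are an illustrative simplification, not the author's exact criteria:

```javascript
// Illustrative-only protocol selection for an IoT data flow.
function pickProtocol(useCase) {
  if (useCase.industrialEquipment) return 'OPC UA';  // PLC / machine integration
  if (useCase.constrainedDevices) return 'MQTT';     // last-mile, unreliable networks
  return 'Apache Kafka';                             // enterprise-wide streaming hub
}

console.log(pickProtocol({ industrialEquipment: true })); // OPC UA
console.log(pickProtocol({ constrainedDevices: true }));  // MQTT
```

The article's point is that these are complements, not competitors: the edge protocols feed the central streaming hub.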

The article then describes:

  • Kappa architecture for a real-time IoT data hub
  • When to use Kafka vs. MQTT and OPC UA?
  • Meeting the challenges of Industry 4.0 through data streaming and data mesh
  • Separation of concerns in the OT/IT world with domain-driven design and true decoupling
  • How to choose between OPC UA and MQTT with Kafka?
  • Decision tree for evaluating IoT protocols
  • Integration between MQTT / OPC UA and Kafka
  • BMW case study: Manufacturing 4.0 with smart factory and cloud

… and much more. An event-driven data streaming platform is elastic and highly available. It represents an opportunity to significantly increase production facilities' overall asset effectiveness. With its data processing and integration capabilities, data streaming complements machine connectivity via MQTT, OPC UA, and HTTP, among others. This allows streams of sensor data to be transported throughout the plant and to the cloud in near real-time. Nice one!

[Read More]

Streaming analytics with Apache Pulsar and Spark structured streaming

Categories

Tags queues messaging big-data apache cio cloud analytics

Apache Pulsar is a promising new toolkit for distributed messaging and streaming. In this piece we combine two of our favorite pieces of tech: Apache Pulsar and Apache Spark. By Daniel Ciocîrlan.

Apache Pulsar excels at storing event streams and performing lightweight stream computing tasks. It's a great fit for long-term storage of data and can also be used to serve results to downstream applications.

Stream processing is an important requirement in modern data infrastructures. Companies now aim to leverage the power of streaming and real-time analytics to provide results to their users faster, enhancing the user experience and driving business value. Typically, streaming data pipelines require a streaming storage layer like Apache Pulsar or Apache Kafka, and then, to perform more sophisticated stream processing tasks, a stream compute engine like Apache Flink or Spark Structured Streaming.
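The storage-layer side of such a pipeline is just a producer writing events to a topic. A sketch using the `pulsar-client` Node.js package against a local broker (the topic name and event shape are illustrative, not from the article):

```javascript
// Publish a user-engagement event to an Apache Pulsar topic.
async function publishEngagementEvent(event) {
  // Lazy require so the sketch only needs the native client when actually run.
  const Pulsar = require('pulsar-client');
  const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });
  const producer = await client.createProducer({ topic: 'user-engagement' });
  await producer.send({ data: Buffer.from(JSON.stringify(event)) });
  await producer.close();
  await client.close();
}
```

A compute engine such as Spark Structured Streaming then subscribes to the same topic via the Pulsar Spark Connector and runs the aggregation logic.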

The article's main points are:

  • The role of Apache Pulsar in streaming data pipelines
  • Example use case: Real-time user engagement
  • Using the Apache Pulsar/Spark Connector

In this article we discussed the role of Apache Pulsar as a backbone of a modern data infrastructure, the streaming use cases Pulsar can support, and how you can use it along with Spark Structured Streaming to implement some more advanced stream processing use cases by leveraging the Pulsar Spark Connector. We also reviewed a real world use case, demonstrated a sample streaming data pipeline, and examined the role of Apache Pulsar and Spark Structured Streaming within the pipeline. Good read!

[Read More]

Right hybrid cloud strategy enables agility at scale

Categories

Tags big-data agile cio cloud ibm

In today’s world, there’s a common thread connecting almost every organization, of every size, across all industries and regions: uncertainty. Change, often disruptive, is happening faster. For the organizations trying to navigate it, the need for business agility, the ability to adapt rapidly and effectively, has never been more important. By @IBM.

The article then dives right in:

  • Need for agility and threat of complexity
  • Why hybrid’s time is now
  • How IBM’s open hybrid cloud strategy stands apart
  • Unlocking value through hybrid cloud
  • Why hybrid cloud matters to you
  • Open hybrid cloud solutions in action

Maybe you’ve already recognized the looming challenges in terms of orchestration, inflexibility, and security, and you’ve taken the first steps toward either doing it yourself (DIY) or going with a provider. Perhaps the most compelling reason to resist the DIY path is the sheer amount of resources an enterprise needs to commit to building and sustaining a homegrown hybrid cloud platform. Talent, in the form of engineers experienced in open-source development, is the main gating factor. Nice one!

[Read More]

Six steps for leading successful data science teams

Categories

Tags big-data analytics cio data-science

An increasing number of organizations are bringing data scientists on board as executives and managers recognize the potential of data science and artificial intelligence to boost performance. But hiring talented data scientists is one thing; harnessing their capabilities for the benefit of the organization is another. By Rama Ramakrishnan.

The article's main parts:

  • Point data science teams toward the right problem
  • Decide on a clear evaluation metric up front
  • Create a common-sense baseline first
  • Manage data science projects more like research than like engineering
  • Check for truth and consequences
  • Log everything, and retrain periodically

It is important to subject results to intense scrutiny to make sure the benefits are real and there are no unintended negative consequences. The most basic check is making sure the results are calculated on data that was not used to build the models. Data science models, like software in general, tend to require a great deal of future effort because of the need for maintenance and upgrades. They have an additional layer of effort and complexity because of their extraordinary dependence on data and the resulting need for retraining. Nice one!
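The "common-sense baseline" and "clear evaluation metric" points can be made concrete with a toy sketch (the data and the choice of mean-absolute-error metric here are hypothetical): a model only earns its complexity if it beats a trivial predictor on held-out data.

```javascript
// Baseline that always predicts the training-set mean.
function meanBaseline(train) {
  const mean = train.reduce((sum, y) => sum + y, 0) / train.length;
  return () => mean;
}

// Agreed-upon evaluation metric: mean absolute error.
function mae(yTrue, yPred) {
  return yTrue.reduce((sum, y, i) => sum + Math.abs(y - yPred[i]), 0) / yTrue.length;
}

const train = [10, 12, 11, 13]; // training targets (hypothetical)
const test = [12, 14];          // held-out targets, never used for fitting

const predict = meanBaseline(train);
const preds = test.map(() => predict());
console.log(mae(test, preds)); // 1.5 — the bar any real model must clear
```

Evaluating on `test` rather than `train` is exactly the "most basic check" the article calls for.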

[Read More]

Exploring Windows UAC bypasses: Techniques and detection strategies

Categories

Tags cio infosec miscellaneous analytics

Malware often requires full administrative privileges on a machine to perform more impactful actions such as adding an antivirus exclusion, encrypting secured files, or injecting code into interesting system processes. By @sbousseaden.

Even if the targeted user has administrative privileges, the prevalence of User Account Control (UAC) means that the malicious application will often default to Medium Integrity, preventing write access to resources with higher integrity levels. To bypass this restriction, an attacker will need a way to elevate integrity level silently and with no user interaction (no UAC prompt). This technique is known as a User Account Control bypass and relies on a variety of primitives and conditions, the majority of which are based on piggybacking elevated Windows features.

The article then does a good job of explaining:

  • UAC Bypass methods
  • Registry Key manipulation
  • DLL hijack
  • Elevated COM interface
  • Token security attributes
  • Most common UAC bypasses

Designing detections by focusing on the key building blocks of an offensive technique is much more cost-effective than trying to cover the endless variety of implementations and potential evasion tunings. In this post, we covered the main methods used for UAC bypass and how to detect them, as well as how enriching process execution events with token security attributes enabled us to create broader detection logic that may match unknown bypasses. In the article you will also find links to further reading. Good one!

[Read More]

Top concerns for operating cloud-native technologies

Categories

Tags cio cloud miscellaneous management analytics

Platform9 announced the results of its research, revealing that 91% of survey respondents cite security, consistent management across environments, high availability, and observability as their top concerns for operating cloud-native technologies. By @helpnetsecurity.

The research also found that despite fast-growing public-cloud deployments, 67% of cloud deployments are distributed, spread out across on-premises, hybrid, and edge clouds.

The state of cloud-native technologies adoption based on the results:

  • Kubernetes dominates container management: App containerization is accelerating, with 53% of respondents planning to containerize their current applications. Nearly 85% of respondents are using Kubernetes or have plans to deploy it in the next six months.
  • Cloud-native hiring continues to be a priority: DevOps, cloud platform engineering, cloud-native developers, and security are the top hiring investments for 2022.
  • Executives across the board are looking for practical solutions to reduce vendor lock-in: While 61% of respondents have high or moderate concern about vendor lock-in, 71% of advanced users with larger deployments are even more concerned than early users. Additionally, managers, executives, and architects show a higher level of concern than engineers (65%). Plans for multiple cloud deployment lead as the number one action to address cloud lock-in, followed by using open-source services (#2) and writing portable apps (#3).
  • While security and operations concerned 91% of respondents, executives were more concerned about cost optimization, data management, and high availability, while practitioners' challenges centered more on day-2 operations such as upgrades, consistent management, observability, and troubleshooting.

The report, which surveyed over 500 technology executives and practitioners, details how enterprises are adopting cloud-native technologies, provides insight into 2022 technology investment priorities, and identifies top concerns to help business leaders and enterprises determine how best to navigate and accelerate their cloud-native initiatives for the rest of the year. Nice one!

[Read More]

DevSecOps: Why you should care and how to get started

Categories

Tags devops cloud app-development infosec

The increasing popularity of DevOps software development methodologies has led to shorter and more agile life cycles, in which software is released and deployed in minutes or hours rather than the days, weeks, or even months required under traditional practices. However, many development teams still experience delays in getting releases into production due to the security considerations that are traditionally brought to bear at the end of the life cycle. To address this, organizations are more and more frequently adopting a DevSecOps approach. By Katrina Novakovic, Chris Jenkins.

The article then covers:

  • What is DevSecOps?
  • Why should developers care about DevSecOps?
  • How can you get started with DevSecOps?
  • How can DevSecOps help with regulatory compliance?
  • DevSecOps: Security + agility

DevSecOps is all about automating and integrating security within all phases of the software development life cycle to produce more secure code more quickly and easily. Getting started requires that you change your mindset and organizational culture to collaborate and share responsibility for producing secure and compliant code, using tools and processes to implement security checks into CI/CD pipelines, and implementing automated security compliance audits and controls to comply with regulations. There is much more to DevSecOps, and you can explore it further as you build upon the foundation of these initial recommendations. Good read!
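The "security checks in CI/CD pipelines" idea can be sketched as a pipeline stage. The example below uses GitHub Actions syntax, and the tool choices (a dependency vulnerability scan via `npm audit`) are illustrative, not recommendations from the article:

```yaml
# Illustrative CI job: security checks run on every push, not at release time.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Dependency vulnerability scan
        run: npm audit --audit-level=high   # fail the build on known high-severity CVEs
      - name: Run tests
        run: npm test
```

The point is the shift-left: a failing security check blocks the merge in minutes instead of surfacing in a pre-release audit weeks later.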

[Read More]

Distributed tracing with Istio, Quarkus and Jaeger

Categories

Tags devops kubernetes monitoring cloud apis microservices

In this article, you will learn how to configure distributed tracing for your service mesh with Istio and Quarkus. For test purposes, we will build and run Quarkus microservices on Kubernetes. The communication between them is going to be managed by Istio. Istio service mesh uses Jaeger as a distributed tracing system. By Piotr Minkowski.

Istio generates distributed trace spans for each managed service. This means that every request sent inside the Istio service mesh will carry the following HTTP headers:

Istio distributed trace spans - headers

Source: https://piotrminkowski.com/2022/01/31/distributed-tracing-with-istio-quarkus-and-jaeger/
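For the traces to join up end-to-end, each service must propagate these headers from incoming to outgoing requests. The header names below are Istio's standard B3 (Zipkin) propagation headers; the function itself is an illustrative sketch, not code from the article (Quarkus does this propagation for you via its OpenTracing integration):

```javascript
// Trace-context headers that Istio expects applications to forward.
const TRACE_HEADERS = [
  'x-request-id',
  'x-b3-traceid',
  'x-b3-spanid',
  'x-b3-parentspanid',
  'x-b3-sampled',
  'x-b3-flags',
];

// Copy only the trace headers from an incoming request onto an outgoing one.
function propagateTraceHeaders(incoming) {
  const outgoing = {};
  for (const name of TRACE_HEADERS) {
    if (incoming[name] !== undefined) outgoing[name] = incoming[name];
  }
  return outgoing;
}

console.log(propagateTraceHeaders({ 'x-b3-traceid': 'abc', accept: 'text/html' }));
// → { 'x-b3-traceid': 'abc' }
```

If a service drops these headers, Jaeger shows disconnected spans instead of one continuous trace.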

The article then provides a good explanation of the following:

  • Service mesh architecture
  • Distributed tracing with Istio
  • Create microservices with Quarkus
  • Run Quarkus applications on Kubernetes
  • Traffic management with Istio
  • Testing Istio tracing with Quarkus

If you would like to try it yourself, you can always take a look at the author's source code by cloning his GitHub repository. Excellent!

[Read More]