Welcome to our curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Smart prefetching with TanStack Query for instant UX

Categories

Tags javascript web-development app-development react ux

Prefetching data is one of the most powerful techniques in React Query. It improves perceived performance by loading data before the user needs it, resulting in near-instant navigation. The article shows how to leverage TanStack Query's prefetching capabilities for instant data retrieval and a seamless user experience. By jsdev.space.

The article's main focus:

  • Prefetching is a powerful technique to improve perceived performance in React apps.
  • prefetchQuery can be triggered on hover, page load, or other events.
  • Always await prefetchQuery to handle potential errors effectively.
  • ensureQueryData is useful for preloading layout-level data (e.g., user authentication).

The article emphasizes the importance of error handling with prefetchQuery and recommends awaiting its execution to catch and address any issues that may arise. Integrating prefetchQuery with useQuery ensures seamless data retrieval without loading spinners. The guide also introduces ensureQueryData, a feature available in TanStack Query 5+, which is particularly well-suited for preloading layout-level data, such as user authentication status or application settings. Nice one!
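The hover-prefetch pattern described above can be sketched with a minimal stand-in cache. Note this is an illustration of the idea only: in a real app you would call TanStack Query's `queryClient.prefetchQuery` and `useQuery`, and `fetchUser` here is a hypothetical data source.

```javascript
// Minimal stand-in for a query cache, illustrating the prefetch pattern.
// In a real app you would call queryClient.prefetchQuery instead.
const cache = new Map();

async function prefetchQuery(key, queryFn) {
  if (cache.has(key)) return; // already cached, skip the network
  try {
    cache.set(key, await queryFn()); // warm the cache ahead of navigation
  } catch (err) {
    // Awaiting lets us observe failures instead of silently losing them.
    console.error(`prefetch failed for ${key}:`, err.message);
  }
}

async function useQueryLike(key, queryFn) {
  // If prefetch already ran, this resolves instantly: no loading spinner.
  if (cache.has(key)) return cache.get(key);
  const data = await queryFn();
  cache.set(key, data);
  return data;
}

// Hypothetical data source standing in for a real API call.
const fetchUser = async () => ({ id: 1, name: 'Ada' });
```

On link hover you would fire `prefetchQuery('user:1', fetchUser)`, so that by the time the user clicks, the query resolves from cache. `ensureQueryData` follows the same idea but also returns the data, which is why it suits layout-level preloads.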

[Read More]

Modern Node.js patterns for 2025

Categories

Tags javascript web-development app-development nodejs

Node.js has undergone a remarkable transformation since its early days. If you’ve been writing Node.js for several years, you’ve likely witnessed this evolution firsthand—from the callback-heavy, CommonJS-dominated landscape to today’s clean, standards-based development experience. By kashw1n.com.

Modern Node.js (2025) embraces web standards and built-in tools to streamline development, enhance security, and improve performance.

The key points in the article:

  • Module system: ESM is the new standard.
  • Web APIs: Native Fetch API supports timeouts and cancellations; AbortController handles asynchronous task cancellation consistently across APIs like fetch and file reads.
  • Built-in Testing: Node’s native test runner (node --test) replaces external frameworks, offering watch mode and coverage reporting out of the box.
  • Async Patterns: Async/await with error handling and Promise.all() balance performance (parallel operations) and reliability (structured logging).
  • Async iterators manage event streams efficiently.
  • Streams & Workers: Modern stream APIs (pipeline, Web Streams) improve interoperability, while worker threads leverage CPU cores for background tasks without blocking the main loop.

The payoff: fewer external dependencies (e.g., axios replaced by the native Fetch API), cleaner syntax (no IIFEs), improved error handling, and seamless integration with web ecosystems like browsers and edge runtimes. These changes reduce complexity, enhance security, and accelerate development cycles. Good read!
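Two of those patterns in one place: cancellation via AbortController and parallelism via Promise.all. The `slowTask` function below is a stand-in for real I/O such as a fetch call or a file read.

```javascript
// AbortController gives one cancellation mechanism across fetch, file
// reads, and your own async code. Here a toy task honors the signal.
function slowTask(ms, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(`done after ${ms}ms`), ms);
    signal?.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new Error('aborted'));
    });
  });
}

async function main() {
  // Run independent tasks in parallel, as the article recommends.
  const [a, b] = await Promise.all([slowTask(10), slowTask(20)]);

  // Cancel a task that takes too long, mimicking fetch(url, { signal }).
  const controller = new AbortController();
  setTimeout(() => controller.abort(), 5);
  try {
    await slowTask(1000, controller.signal);
  } catch (err) {
    return { a, b, aborted: err.message === 'aborted' };
  }
}
```

In real code, `fetch(url, { signal: AbortSignal.timeout(5000) })` achieves the same timeout-based cancellation without the manual wiring.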

[Read More]

Using Redis with FastAPI

Categories

Tags database web-development app-development nosql python

FastAPI is a Python web framework based on the Starlette microframework. With deep support for asyncio, FastAPI is indeed very fast. FastAPI also distinguishes itself with features like automatic OpenAPI (OAS) documentation for your API, easy-to-use data validation tools, and more. By Andrew Brookins.

In this tutorial, we’ll walk through the steps necessary to use Redis with FastAPI. We’re going to build IsBitcoinLit, an API that stores Bitcoin sentiment and price averages in Redis Stack using a timeseries data structure, then rolls these averages up for the last three hours.

The learning objectives of this tutorial:

  • Learn how to install aioredis-py and connect to Redis
  • Learn how to integrate aioredis-py with FastAPI
  • Learn how to use Redis to store and query timeseries data
  • Learn how to use Redis as a cache with aioredis-py

Putting all the pieces together, we now have a FastAPI app that can retrieve Bitcoin price and sentiment averages, store the averages in Redis, cache three-hour summary data in Redis, and serve the data to clients. You will also get a video and all code samples used in the tutorial. Nice one!
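The caching pattern at the heart of the tutorial (check the cache first, fall back to computing, store with a TTL) is language-agnostic. Here it is sketched in plain JavaScript, with an in-memory Map standing in for the Redis client and `summarize` standing in for the rollup query:

```javascript
// Cache-aside with TTL: the shape of "cache three-hour summary data
// in Redis". A Map stands in for the Redis client here.
const store = new Map(); // key -> { value, expiresAt }

async function getCached(key, ttlMs, compute, now = Date.now) {
  const hit = store.get(key);
  if (hit && hit.expiresAt > now()) return hit.value; // cache hit
  const value = await compute(); // e.g. roll up timeseries averages
  store.set(key, { value, expiresAt: now() + ttlMs });
  return value;
}

// Hypothetical expensive computation standing in for the rollup query.
let calls = 0;
const summarize = async () => { calls += 1; return { avgPrice: 42000 }; };
```

With an async Redis client, the same idea becomes roughly `await redis.set(key, value, ex=ttl)` on a miss and `await redis.get(key)` on the fast path.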

[Read More]

What is Azure Local and why should SysAdmins care?

Categories

Tags devops web-development app-development azure

Azure Local is not just a rebrand. It’s a message. One that says Microsoft finally gets how confusing their cloud naming has been. Microsoft’s hybrid story has been a bit confusing over the years, but now we have some clarity. By Andy Syrewicze.

Azure Local is Microsoft’s successor to Azure Stack HCI: a hybrid infrastructure offering that runs Azure services on your own on-premises hardware, managed through Azure Arc and the Azure portal. It extends the Azure control plane to local datacenters and edge sites, which matters for workloads with latency, connectivity, or data-residency constraints.

Some key features:

  • Run virtual machines and containers (including AKS) on validated on-premises hardware.
  • Unified management through the Azure portal and Azure Arc, side by side with cloud resources.
  • Designed for edge and regulated scenarios, including sites with limited connectivity.

Why should SysAdmins care? Azure Local lets you keep workloads on premises for latency, compliance, or data-sovereignty reasons while still using familiar Azure tooling, so applications get a consistent management experience across local and cloud environments. Just as importantly, the rename finally tidies up years of confusing hybrid branding (Azure Stack, Azure Stack HCI), giving admins a clearer view of Microsoft’s hybrid strategy. Nice one!

[Read More]

Building dynamic user journeys: Your guide to Next.js onboarding with OnboardJS

Categories

Tags react web-development app-development cio

Next.js onboarding flows often become complex due to manual state management, conditional logic, and persistence across sessions. Managing step logic, dynamic navigation, persistence across sessions, and analytics can turn your elegant codebase into a complex web of conditional rendering and state management. By onboardjs.com.

OnboardJS solves a critical pain point for Next.js teams: building maintainable, scalable onboarding flows without compromising the framework’s architecture. Unlike monolithic solutions, it’s a headless engine (from @onboardjs/core) that handles the orchestration layer—step transitions, state persistence, and conditional logic—while keeping your UI components clean and framework-agnostic.

Why managers should care:

  • Time-to-market: Teams spend 30%+ of onboarding effort on boilerplate logic. OnboardJS cuts this by 50%, critical when launching features.
  • Risk reduction: Prevents onboarding bottlenecks (e.g., users stuck in step 2 due to broken state). The condition property lets you skip steps based on real user data (e.g., “skip if email already exists”).
  • Seamless integration: Plugins for PostHog (analytics) and Supabase (user data) auto-sync without adding custom code. Example: When users complete onboarding, OnboardJS immediately triggers a Supabase insert via @onboardjs/supabase-plugin.
  • No framework conflicts: Works with Next.js App Router (unlike legacy libraries that force useEffect hacks). The OnboardingProvider is a pure Client Component—no SSR issues.

OnboardJS turns onboarding from a “user experience chore” into a strategic asset—reducing churn, accelerating adoption, and letting your team ship features faster. For managers: It’s the difference between delayed launches and user retention without adding technical debt. Good read!
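The condition-driven step skipping described above is the core of the headless-engine idea. The sketch below is not the actual OnboardJS API, just a minimal illustration of the pattern a library like this orchestrates for you:

```javascript
// A tiny headless flow engine: each step may declare a condition, and
// steps whose condition returns false are skipped. This mirrors the
// "skip if email already exists" behavior described in the article.
function nextStep(steps, currentIndex, context) {
  for (let i = currentIndex + 1; i < steps.length; i++) {
    const step = steps[i];
    if (!step.condition || step.condition(context)) return step.id;
  }
  return null; // flow complete
}

// Hypothetical onboarding flow definition.
const steps = [
  { id: 'welcome' },
  { id: 'collect-email', condition: (ctx) => !ctx.email },
  { id: 'pick-plan' },
];
```

Keeping the step graph as data like this, rather than as conditional JSX, is what makes the flow persistable across sessions and easy to instrument with analytics.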

[Read More]

Rate limiting for Django websites

Categories

Tags nginx devops infosec kubernetes

Rate limiting restricts the number of requests a client can make to your Django website within a specific timeframe. It’s especially useful for blocking malicious bots, crawlers, or brute-force attacks that overwhelm server resources. By Aidas Bendoraitis.

Nginx, often used as a reverse proxy in front of Django applications, provides robust rate-limiting capabilities. Nginx allows you to define rate limit zones, which specify limits on requests based on client IP addresses or other criteria. For example:

  • $binary_remote_addr limits requests per client IP.
  • $server_name applies one shared limit for the entire site, across all clients.

You can also configure burst and nodelay settings:

  • burst: Allows a short spike of requests beyond the main limit (e.g., 2 extra requests).
  • nodelay: Processes requests without delays when within burst limits.

If too many requests come in, Nginx returns a 429 error. For instance, with list views limited to 1 request per second and a burst of 2, one request is served immediately, two more are queued, and any further request within that window is rejected. By implementing rate limiting in Nginx, you can safeguard your Django website from malicious traffic while ensuring smooth performance for real users. Good read!
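A sketch of the Nginx setup described above; the zone name, location path, and upstream address are illustrative:

```nginx
# Track clients by IP; allow 1 request/second for list views.
limit_req_zone $binary_remote_addr zone=listviews:10m rate=1r/s;

server {
    listen 80;
    server_name example.com;

    location /articles/ {
        # Queue up to 2 extra requests; reject the rest.
        limit_req zone=listviews burst=2;
        limit_req_status 429;           # default would be 503
        proxy_pass http://127.0.0.1:8000;  # Django app server
    }
}
```

Adding `nodelay` after `burst=2` serves queued requests immediately instead of pacing them out at the configured rate.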

[Read More]

Securing Kubernetes resources without a VPN

Categories

Tags nginx app-development infosec devops kubernetes

Securing Kubernetes resources that you want to expose only to some users externally is often done through IP allowlisting and a VPN. While this is a tried-and-true method, there are some drawbacks. By Brian Sizemore.

Instead of manual VPN setup, this approach uses OAuth2 Proxy (an open-source tool) paired with Ingress-Nginx (a Kubernetes ingress controller). OAuth2 Proxy authenticates users via their company’s existing identity provider (e.g., Google Workspace, Microsoft) and controls access to resources before they reach Kubernetes. Ingress-Nginx acts as a reverse proxy, redirecting unauthenticated users to the IDP’s login page instead of blocking calls outright.

Here’s how it works:

  • Authentication: When a user tries to access a Kubernetes resource, Ingress-Nginx (the gateway to your cluster) redirects them to OAuth2 Proxy.
  • Validation: OAuth2 Proxy checks if the user is logged into their company’s identity provider (e.g., Google Workspace, Microsoft) using OAuth 2.0 or OpenID Connect.
  • Access Control: If authenticated, the user is granted access. If not, they’re sent to their company’s login page.

By using OAuth2 Proxy, you can simplify access control to internal applications and eliminate the need for a VPN. This approach leverages existing company login credentials and enables fine-grained access control using groups. Good read!
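With oauth2-proxy deployed in the cluster, wiring Ingress-Nginx to it comes down to two annotations on the Ingress resource. The hostnames and service names below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-dashboard
  annotations:
    # Ingress-Nginx asks oauth2-proxy whether the request is authenticated...
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    # ...and sends unauthenticated users to the IdP login page instead.
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  ingressClassName: nginx
  rules:
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80
```

The protected application never sees unauthenticated traffic; authentication is enforced at the ingress layer before requests reach the service.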

[Read More]

Microsoft entrusts DocumentDB to Linux Foundation

Categories

Tags nosql database cio azure

Microsoft has announced that DocumentDB, their distributed NoSQL database built on PostgreSQL, is joining the Linux Foundation. This represents a significant shift from Microsoft’s traditional approach to database development. DocumentDB was initially created within Microsoft to handle document-oriented workloads (think JSON data) at scale, prioritizing high availability and flexibility. By Bobby Borisov.

Previously, DocumentDB’s roadmap and feature set were dictated by Microsoft’s internal priorities. Now, the Linux Foundation will establish a technical steering committee and working groups. These groups will be composed of representatives from various organizations – including Microsoft, other database vendors, cloud providers (AWS, Azure, Google Cloud), and independent developers. This collaborative approach aims to ensure that DocumentDB evolves in a way that benefits the broader community.

While this move is generally positive, there are potential considerations:

  • Microsoft’s Role: Microsoft will still be involved, but they won’t have sole control over the project’s direction.
  • Community Governance Challenges: Open-source projects can sometimes face challenges in reaching consensus on features and priorities.
  • Potential for Fragmentation: While unlikely given PostgreSQL’s foundation, there’s always a risk of forks or diverging development paths within an open-source project.

Ultimately, Microsoft believes this transition will foster wider adoption, improve stability, and create a more vibrant ecosystem around DocumentDB. It demonstrates a commitment to supporting open standards and community-driven innovation in the database space. Nice one!

[Read More]

MCP + SQL: The secret weapon to connect AI to enterprise systems

Categories

Tags sql database ai bots devops

This article addresses the common challenge of integrating AI with existing enterprise systems like Salesforce and SAP. The core concept is leveraging Large Language Models’ (LLMs) proficiency in SQL alongside a standardized communication protocol called the Model Context Protocol (MCP). By Manish Patel.

Essentially, it’s about leveraging LLMs’ surprising ability to understand and generate SQL queries. Instead of building bespoke integrations for each system, this approach treats all enterprise data as accessible through SQL.

Here’s how it works in practice:

  • SQL Connectors: CData connectors act as “universal adapters,” exposing various business systems (Salesforce, SAP, etc.) as SQL databases. For example, a Salesforce connector translates API calls into SQL tables.
  • MCP Bridge: MCP provides a secure channel for LLMs to send and receive these SQL queries. It ensures that every query runs with the user’s credentials, maintaining data security.
  • AI Action: The AI generates a SQL query (e.g., “Find accounts inactive for 90 days”), MCP routes it securely, retrieves results, and allows the AI to act on the information – updating opportunity stages in Salesforce, creating tasks, or generating reports.

The article presents a compelling solution to a common problem: getting AI to actually work with your company’s data. Traditionally, connecting AI to systems like Salesforce or SAP has been incredibly complex and expensive due to the need for custom integrations. This approach uses a clever combination of existing technologies, LLMs’ ability to understand SQL and a secure communication protocol (MCP), to streamline the process. The key takeaway is that you can significantly reduce development time, improve security, and unlock new automation opportunities by adopting this strategy. Start small with read-only access and gradually expand capabilities as trust builds. Nice one!
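The flow in the bullets above, reduced to its shape: every query travels through a bridge that attaches the requesting user's credentials before execution. Everything here (the executor, the table, the row data) is a hypothetical stand-in, not a real CData or MCP API:

```javascript
// Sketch of the MCP-style bridge: every query is executed with the
// requesting user's credentials, never a shared service account.
function mcpBridge(executeSql) {
  return {
    async run(user, sql) {
      if (!user || !user.token) throw new Error('unauthenticated');
      // The executor sees who is asking, so the backend's own
      // permissions and row-level security still apply.
      return executeSql(sql, { asUser: user.id, token: user.token });
    },
  };
}

// Hypothetical executor standing in for a SQL connector.
const fakeExecutor = async (sql, creds) => ({
  sql,
  ranAs: creds.asUser,
  rows: [{ account: 'Acme', daysInactive: 120 }],
});

const bridge = mcpBridge(fakeExecutor);
```

The design choice worth noting is that the AI only ever produces SQL text; authorization stays with the bridge and the underlying systems, which is what makes the "start read-only, expand gradually" rollout practical.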

[Read More]

Interoperability in 2025: Beyond the Erlang VM

Categories

Tags erlang elixir app-development web-development

The Erlang Virtual Machine has, historically, provided three main options for interoperability with other languages and ecosystems, with different degrees of isolation. By Wojtek Mach.

The article discusses several ways Elixir can interact with other programming languages and environments beyond the Erlang VM, emphasizing new advancements in interoperability. It highlights that traditional methods (NIFs, Ports, Distributed Nodes) each have trade-offs, and that a shift towards portability opens new possibilities.

The article focuses on an emerging paradigm: portability. This involves running Elixir code in other environments, targeting their native capabilities.

Two key projects driving this are:

  • AtomVM: A lightweight Erlang VM implementation, designed for resource-constrained environments like microcontrollers (ESP32, STM32). This opens doors for embedding Elixir into IoT devices. AtomVM also targets WebAssembly (WASM).
  • Popcorn: A library leveraging WASM to run Elixir code directly in web browsers. This allows for interactive Elixir applications within a browser and JS interoperability, as demonstrated with a simple example of updating the browser’s content.

The benefits: increased flexibility to leverage existing libraries from other languages; the ability to run Elixir code in new environments like embedded systems or client-side web applications; simplified development of full-stack web applications using a single language (Elixir) throughout the stack; and improved performance through portability solutions targeting specific hardware or runtimes. Good read!

[Read More]