Welcome to a curated list of handpicked free online resources on IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Linux surpasses 5% market share on US desktops for the first time

Tags browsers open-source linux cio analytics

Linux’s rising desktop market share signals a shifting operating-system landscape, with implications for platform support, security hardening, and integration opportunities. Understanding the drivers of this shift (privacy, the open-source ethos, hardware compatibility) informs long-term technology roadmaps and investment decisions around containerization, virtualization, and cloud-native architectures. The growing reliance on Linux-based systems also shapes developer toolchains and deployment strategies. By Skye Jacobs.

Drivers for growth:

  • Windows Dissatisfaction: Issues like the end-of-life for Windows 10, the cost of upgrading to Windows 11, and concerns around forced updates are pushing users towards alternatives
  • Privacy Concerns: Increasing awareness of data collection practices by major operating system vendors is driving demand for more privacy-focused options
  • Steam Deck Influence: The Steam Deck’s success demonstrates Linux’s suitability for gaming and its ability to attract a new user base
  • Usability Improvements: Distributions like Ubuntu and Linux Mint have significantly improved ease of use, lowering the barrier to entry for non-technical users

Linux’s recent rise to 5% market share in the US desktop market represents a significant shift. This isn’t just about numbers; it reflects a broader trend of users actively seeking alternatives that prioritize privacy, control, and flexibility. Several factors are contributing to this growth. Good read!

[Read More]

Accessible by design: Building inclusive digital products from the ground up

Tags browsers app-development frontend web-development ux

“Accessible by design” refers to building digital products in a way that makes accessibility a core part of the development process from the beginning, not an afterthought. Instead of waiting until the end of a project to address accessibility issues, this approach ensures every decision—from content structure and color choices to navigation patterns and heading hierarchy—is made with accessibility in mind. Tools like semantic HTML, logical reading order, readable typography, and keyboard-friendly interactions are used from day one. By Nir Horesh.
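The article’s own examples aren’t reproduced here, but the day-one practices it names (semantic structure, heading hierarchy, native interactive elements) look roughly like this in markup; the content below is invented for illustration:

```html
<!-- Landmarks and a logical heading hierarchy give screen readers
     a navigable outline, like a book's table of contents -->
<main>
  <h1>Order history</h1>
  <section aria-labelledby="recent-heading">
    <h2 id="recent-heading">Recent orders</h2>
    <!-- A native <button> is keyboard-focusable and announces its
         role by default, unlike a click handler on a <div> -->
    <button type="button">Reorder</button>
  </section>
</main>
```

The point of the approach is that these choices cost almost nothing when made up front, whereas retrofitting them into a div-heavy layout late in a project is expensive.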

The article also explains:

  • The foundation: Understanding your structure
  • Think like a book’s table of contents
  • Visual design with purpose
  • The language of accessibility: Accessible names
  • Functionality: Making interaction intuitive
  • The professional standard: Quality across all contexts

The question isn’t whether you can afford to prioritize accessibility — it’s whether you can afford not to. In a world where digital experiences are increasingly central to how we work, learn, shop, and connect, accessible design isn’t just about doing the right thing. It’s about doing things right. The article also provides examples of best practices, along with links to companies and websites that prioritize accessibility. Good read!

[Read More]

Building a real-time video AI service with Google Gemini

Tags akka java ai app-development google

The article describes the development of a real-time video AI service for a major global service provider, leveraging Google Gemini’s Multimodal Live API and Akka’s SDK. The team successfully built, deployed, and scaled the service to handle thousands of transactions per second, far exceeding customer requirements. Key components included video ingestion, augmentation, and conversational storage, all deployed within a private Akka environment provisioned in a Google VPC in just 2 hours. By Johan Andrén.

A significant challenge was the lack of an efficient JVM client for the Gemini API, as the only available option was a blocking, synchronous Python client. The team reverse-engineered the Python client’s behavior and developed their own reactive client using Akka streams and remoting libraries in just one day. The protocol uses JSON objects over WebSocket, with an initial setup message followed by streaming of audio, video, and text data, receiving either audio or text in return.

The implementation utilized Akka HTTP’s WebSocket API and modeled the protocol using Java records. Jackson was employed for JSON serialization/deserialization, with customizations for base64 encoding and field naming. The solution demonstrates how to effectively interact with third-party WebSocket APIs without native JVM support, enabling Akka-based services to perform live video, audio, and text interactions with Google Gemini. Nice one!
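The team’s actual Java/Jackson client isn’t shown, but the message-modeling pattern the article describes (typed records serialized to JSON, with binary media base64-encoded for transport inside text frames) can be sketched in Python. The field names here are hypothetical, not the real Gemini Live wire format:

```python
import base64
import json
from dataclasses import asdict, dataclass


@dataclass
class SetupMessage:
    # First message sent on the WebSocket: selects the model
    # (field name is illustrative only)
    model: str


@dataclass
class MediaChunk:
    # Binary audio/video must be base64-encoded to travel inside JSON
    mime_type: str
    data: bytes

    def to_json(self) -> str:
        payload = {
            "mimeType": self.mime_type,
            "data": base64.b64encode(self.data).decode("ascii"),
        }
        return json.dumps(payload)


setup = json.dumps(asdict(SetupMessage(model="example-model")))
chunk = MediaChunk("audio/pcm", b"\x00\x01\x02").to_json()
```

In the Java version this shape maps naturally onto records plus Jackson serializers, with the base64 and field-naming customizations the article mentions.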

[Read More]

Addressing hidden risks in AI implementation for safety

Tags ai cio infosec software learning management

AI safety discussions predominantly focus on easy-to-conceptualise, highly salient risks such as algorithmic bias, hallucinations and disinformation. While these are crucial concerns, they overlook a fundamental truth we’ve learned from other high-stakes fields like aviation and healthcare: sometimes the most dangerous risks hide in plain sight. By Manu Savani.

The article dives into:

  • Hidden Risks: AI safety needs to address subtle risks arising from how AI is used daily, not just obvious issues like bias
  • Human Oversight Limitations: Simply having humans oversee AI isn’t a foolproof solution; they can miss problems
  • Operational Harm: Well-intentioned AI implementation can create hidden harms (e.g., worker fatigue, unequal outcomes) that are often overlooked
  • Proactive Framework: A framework to proactively identify and address these ‘hidden’ risks is essential for safe AI deployment

AI is rapidly transforming how we work, offering significant productivity gains. However, alongside the excitement, there’s a crucial need to address “hidden” risks in AI implementation – risks that aren’t immediately obvious but can have serious consequences for our teams and the organization as a whole. This article highlights a shift from focusing solely on technical aspects like algorithmic bias to understanding how AI is used day-to-day.

The current reliance on “human-in-the-loop” oversight isn’t a foolproof solution; even skilled individuals can miss problems when working with complex AI systems. More concerningly, well-intentioned use of AI tools can inadvertently create new challenges. For example, automating routine tasks might seem efficient, but if it leads to employee burnout or deskilling, the overall impact could be negative.

The Cabinet Office has developed a practical framework for identifying these hidden risks. It focuses on six key areas – from ensuring quality assurance when using AI-powered tools to addressing potential mismatches between the task and the tool being used. They’ve even created prompts to help teams proactively identify issues before they become problems. Excellent read!

[Read More]

Make Cline enterprise-ready using an AI Gateway

Tags software ai programming web-development app-development

Cline is an AI-powered coding assistant that enhances developers’ productivity by offering advanced code suggestions and support in debugging and architectural tasks. However, when scaling Cline across an organization, challenges such as security risks, usage tracking, and compliance arise. Portkey’s AI Gateway addresses these challenges by providing enterprise-ready features like centralized access, observability, governance, and security guardrails. By Drishti Shah.

While Cline is powerful for individual use, scaling it across multiple teams introduces challenges:

  • Security Risks: Without proper safeguards, sensitive data such as API keys or customer information might be exposed
  • Usage Tracking: Organizations need clear insights into who uses Cline, how often, and the associated costs to manage resources effectively
  • Compliance Issues: Ensuring compliance with industry standards requires robust mechanisms to prevent unintended data leaks

Portkey’s AI Gateway integrates with Cline to provide enterprise-level features like security guardrails, centralized observability, governance, and model access management without altering the developer experience. Key benefits include enhanced security through data redaction, improved performance via caching, comprehensive usage analytics for cost management, and flexible model options across various providers. Good read!
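Portkey’s actual guardrails aren’t detailed in the article; as a hedged illustration of what request-level data redaction means, a toy filter sitting between a coding assistant and a model provider might look like this (the patterns and labels are invented for the example):

```python
import re

# Illustrative patterns only; a production gateway ships curated,
# well-tested detectors for many credential and PII formats
PATTERNS = {
    "api_key": re.compile(r"sk-[A-Za-z0-9]{16,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}


def redact(text: str) -> str:
    """Mask sensitive substrings before a prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text


print(redact("key sk-abcdefghijklmnop, mail dev@example.com"))
```

Because the redaction happens in the gateway, individual developers keep the normal Cline experience while the organization gets a single enforcement point.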

[Read More]

Test layers from unit to system

Tags tdd web-development app-development software

The article explores the importance of layered software testing, from unit to system tests, to build confidence and prevent systemic failures. It compares different testing strategies like the Pyramid and Trophy, arguing for a balanced approach tailored to project needs. By Jim Humelsine.

Key learnings:

  • Layered testing is crucial
  • Understanding test scopes
  • Testing strategies offer different trade-offs
  • Confidence, not perfection
  • Testing has limits

Using the Mars Climate Orbiter failure as a cautionary tale, this article emphasizes the critical need for a layered testing strategy to prevent systemic failures. It explains that different test layers—Unit, Integration/Acceptance, and System—operate at varying scopes, much like how a building inspector, utility manager, and city planner have different perspectives on a city. Unit tests provide deep confidence in individual components, while integration and system tests ensure these components cooperate correctly and the application meets user expectations as a whole.
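As a minimal sketch of how scope widens across layers, consider an invented example built around a unit conversion of the kind that doomed the Orbiter (the functions are illustrative; 1 pound-force is 4.448222 newtons):

```python
import unittest


def to_newtons(pound_force: float) -> float:
    """Convert pound-force to newtons (1 lbf = 4.448222 N)."""
    return pound_force * 4.448222


def thruster_impulse(pound_force: float, seconds: float) -> float:
    # Contract at the next layer up: impulse is returned in SI
    # newton-seconds, composing the converter with the duration
    return to_newtons(pound_force) * seconds


class TestLayers(unittest.TestCase):
    def test_unit_conversion(self):
        # Unit scope: one function, checked in isolation
        self.assertAlmostEqual(to_newtons(1.0), 4.448222)

    def test_components_cooperate(self):
        # Wider scope: verifies the functions compose correctly,
        # the kind of mismatch a unit test alone cannot catch
        self.assertAlmostEqual(thruster_impulse(2.0, 3.0), 26.689332)
```

Each test on its own would have passed in the Orbiter scenario; only a test that exercises components together can expose a disagreement about units at the boundary.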

The text explores various testing strategies, including the traditional “Ice Cream Cone” (heavy on manual testing), the “Pyramid” (a strong foundation of unit tests), and the “Trophy” (a focus on user-centric acceptance tests). It argues that no single approach is universally best; instead, a blended strategy tailored to the project’s context is most effective. The ultimate goal is not just finding bugs but building confidence to refactor and release frequently, creating a robust and cohesive product. Good read!

[Read More]

DIY Docker volume drivers: What's missing

Tags cloud docker app-development software

This post explores the limitations of the current Docker volume plugin ecosystem, emphasizing the difficulty in finding unprivileged solutions. The author details their journey in creating a custom volume plugin as a way to address this limitation. By Adam Faris.

The article focuses on:

  • The lack of readily available unprivileged Docker volume plugins presents a challenge for many use cases, particularly those prioritizing security
  • Building a custom plugin requires navigating complex build processes and leveraging specific tools like the Go plugin SDK
  • The author’s project provides a functional example of an unprivileged volume plugin that can perform basic file operations, demonstrating a viable approach to data persistence
  • This work underscores the need for more lightweight and flexible solutions within the Docker volume plugin ecosystem and offers valuable insights for developers interested in contributing to this area

The author provides a comprehensive overview of the steps involved, including creating a root filesystem, building a Docker image, and enabling the custom plugin. This work offers a practical insight into developing lightweight Docker volume plugins and highlights potential areas for future exploration in this domain. Good read!
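The author’s plugin is built on the Go plugin SDK, but the underlying protocol is simply JSON request/response over HTTP endpoints such as `/VolumeDriver.Create` and `/VolumeDriver.Mount`. A rough Python sketch of those two handlers (the in-memory bookkeeping is illustrative, not the author’s code):

```python
import os
import tempfile

# Docker calls a volume plugin with JSON POSTs; every response
# carries an "Err" field, empty string on success.
STATE: dict[str, str] = {}  # volume name -> backing path
ROOT = tempfile.mkdtemp(prefix="vol-")


def create(req: dict) -> dict:
    """Handle /VolumeDriver.Create: {"Name": ..., "Opts": {...}}."""
    path = os.path.join(ROOT, req["Name"])
    # Unprivileged approach: a plain directory, no mounts required
    os.makedirs(path, exist_ok=True)
    STATE[req["Name"]] = path
    return {"Err": ""}


def mount(req: dict) -> dict:
    """Handle /VolumeDriver.Mount: returns the host path to bind in."""
    path = STATE.get(req["Name"])
    if path is None:
        return {"Err": f"no such volume: {req['Name']}"}
    return {"Mountpoint": path, "Err": ""}
```

A real plugin serves these handlers over a Unix socket that Docker discovers via a plugin spec file; the point here is only that the protocol itself is small and approachable.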

[Read More]

Docker's best-kept secret: How observability saves developers' sanity

Tags cloud docker devops how-to

Observability is crucial for managing the increasing complexity of modern distributed software systems, especially those built with Docker containers and microservices. Traditional monitoring often falls short, leading to slow troubleshooting and increased Mean Time To Resolution (MTTR). End-to-end observability, particularly through distributed tracing, provides deep insights into system behavior, enabling proactive detection of performance issues and improved reliability. By Aditya Gupta.

Main sections in the article:

  • Observability vs. Monitoring
  • Challenges in Distributed Systems
  • Distributed Tracing
  • OpenTelemetry
  • Integration benefits
  • Advanced techniques
  • CI/CD integration
  • Future trends

The article highlights OpenTelemetry and Jaeger as key tools for achieving this. OpenTelemetry is an open standard for instrumenting applications and collecting telemetry data, while Jaeger is an open-source distributed tracing system that visualizes and analyzes this data. Their integration allows developers to pinpoint bottlenecks and issues that are often obscured by the transient nature of containers and asynchronous microservice communication.

Implementing observability involves instrumenting applications with OpenTelemetry, containerizing them with Docker, and deploying them with Jaeger using tools like Docker Compose. This approach transforms debugging from guesswork to a timeline-driven analysis, significantly reducing incident response times. Major tech firms already leverage these tools to enhance performance, user experience, and system reliability.
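Neither OpenTelemetry’s API nor Jaeger is shown here; as a conceptual sketch of what distributed tracing records (a shared trace ID, parent/child spans, and timing that can be reassembled into a timeline), consider this toy tracer in stdlib Python:

```python
import time
import uuid
from contextlib import contextmanager

# What OpenTelemetry automates: every span carries the trace ID of the
# originating request, its parent span, and its duration, so a backend
# like Jaeger can rebuild the full request timeline across services.
SPANS = []
_stack = []


@contextmanager
def span(name: str, trace_id: str):
    record = {
        "name": name,
        "trace_id": trace_id,
        "parent": _stack[-1]["name"] if _stack else None,
        "start": time.time(),
    }
    _stack.append(record)
    try:
        yield record
    finally:
        record["duration"] = time.time() - record["start"]
        _stack.pop()
        SPANS.append(record)


trace_id = uuid.uuid4().hex
with span("checkout", trace_id):
    with span("inventory-service", trace_id):
        pass  # a real downstream call would propagate trace_id in headers
```

In a containerized system the crucial step is that last comment: the trace ID travels across service boundaries (typically in HTTP headers), which is exactly what the transient nature of containers otherwise obscures.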

[Read More]

How to run GUI-based applications in Docker

Tags programming cloud docker devops iot how-to

Docker is commonly used for server-side and command-line apps, but with the right setup you can also run GUI-based applications inside containers. By bundling GUI libraries and display tools and configuring display sharing with the host system, a container can run desktop apps in a secure, isolated environment. This approach packages applications with their dependencies while maintaining isolation, enabling consistent cross-platform deployment without cluttering the host system. By Anees Asghar.
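On a Linux host running X11, “display sharing” typically means passing the DISPLAY variable and mounting the X socket into the container. A commonly cited invocation, shown here as an illustration (the image name is a placeholder, and `xhost +local:` loosens X access control, so use it with care):

```shell
# Allow local containers to talk to the X server (relaxes access control)
xhost +local:docker

# Share the host display and X11 socket with the container
docker run --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-gui-image
```

Wayland hosts and macOS/Windows need different plumbing (e.g. XWayland or an X server on the host), which is part of why the article walks through the setup step by step.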

The article further explains:

  • Isolated environments prevent system conflicts
  • Consistent behavior across different machines
  • Lightweight alternative to virtual machines
  • Easy testing and debugging capabilities
  • Cross-platform Linux GUI support

Running GUI-based applications in Docker is a great way to extend what containers can do beyond the command line. With the right setup, you can launch desktop apps from a container as if they were installed on your system. It’s a simple yet powerful approach for testing, development, or exploring Linux tools in a clean environment. Good read!

[Read More]

Multimodal AI for IoT devices requires a new class of MCU

Tags programming cloud ai infosec servers iot how-to

The rise of AI-driven IoT devices is pushing the limits of today’s microcontroller unit (MCU) landscape. While AI-powered perception applications, such as voice, facial recognition, object detection, and gesture control, are becoming essential in everything from smart home devices to industrial automation, the hardware available to support them is not keeping pace. Context-aware computing offers a way forward: ultra-low-power operation most of the time, with high-performance AI capabilities available when needed. By Todd Dust.

You will learn about:

  • Traditional MCUs are Inadequate: The existing landscape of 32-bit MCUs cannot efficiently handle the computational and power requirements of modern AI-driven IoT applications.
  • The Need for Energy Efficiency: Many current AI MCUs are not optimized for the ultra-low-power, always-on nature of IoT devices, leading to poor battery life and performance trade-offs.
  • Multi-Gear Architecture is the Solution: A tiered architecture that dynamically shifts between ultra-low-power, efficiency, and high-performance compute domains is key to balancing power consumption and AI processing needs.
  • Context-Aware Computing: The new approach enables devices to use only the necessary compute power for a given task, from simple environmental monitoring to complex AI inferencing, dramatically improving energy efficiency.
  • Standardization is Crucial: Supporting common platforms like FreeRTOS and Zephyr helps standardize development, making it easier for designers to adopt these advanced MCUs in a rapidly evolving IoT space.

The rise of AI in IoT devices has exposed the limitations of traditional MCUs, which struggle with the performance and power demands of modern workloads. Current AI-ready hardware is often inflexible, proprietary, or repurposed from other domains, resulting in poor energy efficiency for always-on, battery-powered devices. This creates a significant gap in the market for a new class of processors.

To address this, a new multi-tiered MCU architecture offers a more intelligent solution. It uses a “multi-gear” approach with three distinct domains: an ultra-low-power “always-on” tier for constant monitoring, an “efficiency” tier for basic AI tasks, and a “performance” tier for demanding computations. This design dynamically allocates the right amount of power, ensuring high performance when needed while drastically conserving energy during idle or low-intensity periods. This context-aware computing represents a major step forward for creating scalable and efficient AI-enabled IoT devices. Nice one!
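The article describes silicon, not software, but the tier-selection idea can be sketched abstractly. The tier names echo the article; the capacity numbers and dispatch rule below are invented for illustration:

```python
# "Multi-gear" dispatch: route each workload to the lowest-power
# compute domain whose capacity covers it. Costs/capacities are in
# arbitrary illustrative units.
TIERS = [
    ("always-on", 1),      # e.g. wake-word or motion detection
    ("efficiency", 10),    # e.g. keyword spotting, simple classification
    ("performance", 100),  # e.g. multimodal AI inference
]


def select_tier(workload_cost: int) -> str:
    """Return the cheapest domain that can handle the workload."""
    for name, capacity in TIERS:
        if workload_cost <= capacity:
            return name
    raise ValueError("workload exceeds device capability")
```

The energy win comes from the ordering: the device spends almost all of its time in the always-on tier and only spins up the performance domain for the rare heavy inference.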

[Read More]