Welcome to a curated list of handpicked free online resources covering IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Microsoft entrusts DocumentDB to Linux Foundation

Categories

Tags nosql database cio azure

Microsoft has announced that DocumentDB, its distributed NoSQL database built on PostgreSQL, is joining the Linux Foundation. This represents a significant shift from Microsoft’s traditional approach to database development. DocumentDB was initially created within Microsoft to handle document-oriented workloads (think JSON data) at scale, prioritizing high availability and flexibility. By Bobby Borisov.

Previously, DocumentDB’s roadmap and feature set were dictated by Microsoft’s internal priorities. Now, the Linux Foundation will establish a technical steering committee and working groups. These groups will be composed of representatives from various organizations – including Microsoft, other database vendors, cloud providers (AWS, Azure, Google Cloud), and independent developers. This collaborative approach aims to ensure that DocumentDB evolves in a way that benefits the broader community.

While this move is generally positive, there are potential considerations:

  • Microsoft’s Role: Microsoft will still be involved, but they won’t have sole control over the project’s direction.
  • Community Governance Challenges: Open-source projects can sometimes face challenges in reaching consensus on features and priorities.
  • Potential for Fragmentation: While unlikely given PostgreSQL’s foundation, there’s always a risk of forks or diverging development paths within an open-source project.

Ultimately, Microsoft believes this transition will foster wider adoption, improve stability, and create a more vibrant ecosystem around DocumentDB. It demonstrates a commitment to supporting open standards and community-driven innovation in the database space. Nice one!

[Read More]

MCP + SQL: The secret weapon to connect AI to enterprise systems

Categories

Tags sql database ai bots devops

This article addresses the common challenge of integrating AI with existing enterprise systems like Salesforce and SAP. The core concept is leveraging Large Language Models’ (LLMs) proficiency in SQL alongside a standardized communication protocol called the Model Context Protocol (MCP). By Manish Patel.

Essentially, it’s about leveraging LLMs’ surprising ability to understand and generate SQL queries. Instead of building bespoke integrations for each system, this approach treats all enterprise data as accessible through SQL.

Here’s how it works in practice:

  • SQL Connectors: CData connectors act as “universal adapters,” exposing various business systems (Salesforce, SAP, etc.) as SQL databases. For example, a Salesforce connector translates API calls into SQL tables.
  • MCP Bridge: MCP provides a secure channel for LLMs to send and receive these SQL queries. It ensures that every query runs with the user’s credentials, maintaining data security.
  • AI Action: The AI generates a SQL query (e.g., “Find accounts inactive for 90 days”), MCP routes it securely, retrieves results, and allows the AI to act on the information – updating opportunity stages in Salesforce, creating tasks, or generating reports.
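To make the three steps above concrete, here is a minimal, hypothetical sketch of the flow in Python. It uses SQLite as a local stand-in for a connector-backed SQL view (in the article's setup, a CData-style connector would expose Salesforce objects as SQL tables, and an MCP bridge would route the query under the user's credentials); the table, column names, and the "LLM-generated" query are all invented for illustration:

```python
import sqlite3
from datetime import date, timedelta

def days_ago(n):
    return str(date.today() - timedelta(days=n))

# Hypothetical stand-in for a connector-backed SQL view of a CRM.
# A CData-style connector would expose Salesforce objects as SQL tables;
# here SQLite fakes one such table locally.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, last_activity TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("Acme", days_ago(200)), ("Globex", days_ago(10)), ("Initech", days_ago(120))],
)

# The SQL an LLM might generate for "Find accounts inactive for 90 days"
# (hard-coded here; in practice the model emits this text).
llm_generated_sql = "SELECT name FROM accounts WHERE last_activity < ?"

# An MCP-style bridge would route the query under the requesting user's
# credentials; this sketch simply executes it locally.
inactive = [row[0] for row in conn.execute(llm_generated_sql, (days_ago(90),))]
print(sorted(inactive))  # ['Acme', 'Initech'] -- Globex was active recently
```

The point of the pattern is that only the last step changes per use case: the same SQL-in, rows-out channel serves reporting, task creation, or record updates.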

The article presents a compelling solution to a common problem: getting AI to actually work with your company’s data. Traditionally, connecting AI to systems like Salesforce or SAP has been incredibly complex and expensive due to the need for custom integrations. This approach uses a clever combination of existing technologies – LLMs’ ability to understand SQL and a secure communication protocol (MCP) – to streamline this process. The key takeaway is that you can significantly reduce development time, improve security, and unlock new automation opportunities by adopting this strategy. Start small with read-only access and gradually expand capabilities as trust builds. Nice one!

[Read More]

Interoperability in 2025: Beyond the Erlang VM

Categories

Tags erlang elixir app-development web-development

The Erlang Virtual Machine has, historically, provided three main options for interoperability with other languages and ecosystems, with different degrees of isolation. By Wojtek Mach.

The article discusses several ways Elixir can interact with other programming languages and environments beyond the Erlang VM, emphasizing new advancements in interoperability. It highlights that traditional methods (NIFs, Ports, Distributed Nodes) each have trade-offs, and a shift towards portability opens new possibilities.

The article focuses on a newly emerging paradigm: portability. This involves running Elixir code in other environments, targeting their native capabilities.

Two key projects driving this are:

  • AtomVM: A lightweight Erlang VM implementation, designed for resource-constrained environments like microcontrollers (ESP32, STM32). This opens doors for embedding Elixir into IoT devices. AtomVM also targets WebAssembly (WASM).
  • Popcorn: A library leveraging WASM to run Elixir code directly in web browsers. This allows for interactive Elixir applications within a browser and JS interoperability, as demonstrated with a simple example of updating the browser’s content.

The takeaways:

  • Increased flexibility to leverage existing libraries from other languages.
  • The ability to run Elixir code in new environments like embedded systems or client-side web applications.
  • Simplified development of full-stack web applications using a single language (Elixir) throughout the stack.
  • Improved performance through portability solutions targeting specific hardware or runtimes.

Good read!

[Read More]

PHP 8.5 adds pipe operator: What it means

Categories

Tags php cloud software-architecture app-development web-development

PHP 8.5, due out in November of this year, will bring with it another long-sought-after feature: the pipe operator (|>). It’s a small feature with huge potential, yet it still took years to happen. By Larry Garfield.

On its own, a single pipe is not all that interesting. Where it becomes interesting is when it is repeated, or chained, to form a “pipeline.” For example, here’s a real code example:

$arr = [
  new Widget(tags: ['a', 'b', 'c']),
  new Widget(tags: ['c', 'd', 'e']),
  new Widget(tags: ['x', 'y', 'a']),
];

$result = $arr
    |> fn($x) => array_column($x, 'tags') // Gets an array of arrays
    |> fn($x) => array_merge(...$x)       // Flatten into one big array
    |> array_unique(...)                  // Remove duplicates
    |> array_values(...)                  // Reindex the array.
;

// $result is ['a', 'b', 'c', 'd', 'e', 'x', 'y']
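For readers coming from other languages, the same pipeline idea can be approximated with a small helper built on `functools.reduce` – a sketch for comparison only, not part of the article; the `pipe` helper and the dict-based widgets are invented here:

```python
from functools import reduce
from itertools import chain

# Stand-ins for the PHP Widget objects above.
widgets = [
    {"tags": ["a", "b", "c"]},
    {"tags": ["c", "d", "e"]},
    {"tags": ["x", "y", "a"]},
]

def pipe(value, *steps):
    """Feed value through each step in turn, like PHP's |> chain."""
    return reduce(lambda acc, fn: fn(acc), steps, value)

result = pipe(
    widgets,
    lambda ws: [w["tags"] for w in ws],            # array_column($x, 'tags')
    lambda tags: list(chain.from_iterable(tags)),  # array_merge(...$x)
    lambda flat: list(dict.fromkeys(flat)),        # array_unique + reindex
)
print(result)  # ['a', 'b', 'c', 'd', 'e', 'x', 'y']
```

The chain reads top-to-bottom in data-flow order, which is exactly the readability win the pipe operator brings to PHP natively.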

The article further explains:

  • What is a pipe operator?
  • Where did it come from?
  • More than the sum of its parts
  • What comes next?

The first of those upcoming features is a second attempt at Partial Function Application. It is a larger feature, but first-class callables have already brought in much of the necessary plumbing, which simplifies the implementation. With pipes now providing a natural use case, as well as easy optimization points, it’s worth a second attempt. Whether it makes it into PHP 8.5, is delayed to 8.6, or is again rejected is still an open question as of this writing, though the author is hopeful. Major thanks to Arnaud Le Blanc from the PHP Foundation team for picking it up and updating the implementation. Interesting read!

[Read More]

Four hot startups aim to keep AI data centers cool

Categories

Tags startups cloud cio performance

Data centers consume up to one-third of their energy for cooling, driving demand for innovative solutions. Direct-to-chip cooling, an emerging technology, uses cold plates to target heat sources directly on CPUs/GPUs, improving efficiency and sustainability. Traditional cooling relies on large chillers and water systems, while direct-to-chip cooling attaches cold plates to processors. These plates use liquid or advanced materials to absorb heat locally, reducing reliance on centralized cooling and lowering energy use. By Heather Clancy.

Meanwhile, the market for data center cooling technologies is poised to double over the next seven years, reaching a projected $42.5 billion by 2032. The category ranges from massive chillers that air-condition entire data center halls to newer technologies that directly cool servers and equipment racks.

https://www.fortunebusinessinsights.com/industry-reports/data-center-cooling-market-101959

Startups are attracting significant investment ($24M–$50M) and partnerships with industry leaders, signaling market confidence. However, adoption requires collaboration between IT and sustainability teams to evaluate trade-offs. While direct-to-chip cooling offers long-term savings, its implementation may demand rethinking infrastructure and vendor relationships. For managers, this trend underscores the need to prioritize cooling solutions that align with both performance and ESG targets, as the demand for AI scales. The shift also highlights the role of innovation in reducing operational costs and environmental impact, making it a strategic investment for future-proofing digital operations. Good read!

[Read More]

Commenting in MySQL: Syntax, best practices, and examples

Categories

Tags sql database app-development learning

MySQL comments are text annotations that the MySQL engine ignores; they exist to explain and document your database code and don’t affect query execution. Treat them as a resource for anyone who will review or modify the database, and follow best practices to keep them readable for other developers. By Devart.

Commenting in MySQL is a practice that enhances SQL code readability, maintainability, and documentation. It involves adding notes within the code to explain logic, purpose, and usage. The article then explains:

  • Understanding MySQL comments
  • Types of comments in MySQL
  • Best practices for commenting in MySQL
  • Adding comments to MySQL database structures
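As a quick illustration of the syntax the article covers, the snippet below runs a commented query against SQLite, which shares the `--` and `/* */` comment forms with MySQL (note that MySQL additionally supports `#` for single-line comments and requires a space after `--`). The table and data are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada'), ('Linus')")

# Both comment styles are stripped by the engine before execution,
# so this query behaves exactly like a bare SELECT.
query = """
    -- Single-line comment: fetch every registered user.
    SELECT name
    FROM users        /* inline block comment: no filtering yet */
    ORDER BY name;
"""
names = [row[0] for row in conn.execute(query)]
print(names)  # ['Ada', 'Linus']
```

Running the same query with and without the comments returns identical results, which is precisely why comments are a free way to document intent.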

Proper and accurate commenting is important for maintaining consistency and readability in your code. Comments help you explain the logic behind queries and make it easier to understand for others. The article introduced the fundamentals of code commenting in MySQL, including comment syntax rules and comment types. Nice one!

[Read More]

The future of data processing: PostgreSQL evolution with YingJun Wu of RisingWave

Categories

Tags streaming app-development software-architecture database cio

RisingWave, founded by YingJun Wu, is a stream processing system that has gained attention for its PostgreSQL compatibility and efficient resource utilization. Unlike traditional systems like Apache Flink, RisingWave leverages S3 as its primary storage, similar to Snowflake’s architecture. This approach not only reduces storage costs but also enables seamless elastic scaling. By Firebolt Team.

The conversation highlights a trend in the data engineering space—decoupling storage and compute, inspired by Snowflake, and leveraging open formats like Iceberg to reduce vendor lock-in and enhance flexibility. RisingWave is positioned to capitalize on this trend, providing a potentially powerful and cost-effective solution for stream processing workloads.

Use Cases and Benefits:

  • Streamlined Data Processing: Organizations can process high-volume data streams with lower resource requirements.
  • Easy Integration: PostgreSQL compatibility simplifies adoption for developers and reduces the learning curve.
  • Vendor Lock-in Mitigation: By utilizing Apache Iceberg for data storage, RisingWave helps organizations avoid vendor lock-in.

Limitations and Future Directions:

  • Data Ingestion: RisingWave’s reliance on S3 may introduce latency in certain scenarios.
  • Evolving Ecosystem: As the landscape around Apache Iceberg and stream processing continues to evolve, RisingWave’s adaptability will be crucial.

For teams using Snowflake or similar systems, RisingWave offers a way to eliminate redundant data copies and lower costs by processing data in real time and storing it in Iceberg. Its single-binary deployment (similar to DuckDB) reduces operational overhead, making it ideal for startups or companies needing rapid testing. However, the shift to Iceberg-based pipelines may require rethinking how data is structured, queried, and maintained. RisingWave’s focus on scalability and open formats aligns with modern trends but demands careful planning to avoid pitfalls like S3 performance bottlenecks. Overall, it represents a strategic move toward unified, flexible data infrastructure that balances innovation with practicality. Good read!

[Read More]

Serverless is not a primary

Categories

Tags serverless app-development web-development cio devops

The article emphasizes that while serverless is a powerful and exciting architectural pattern, it’s frequently misapplied. Organizations often hope serverless will solve their software delivery problems, but it’s actually an enabler that requires strong foundational practices to realize its benefits. By Seth Orell.

Think of it as a hierarchy:

  • Continuous Integration (CI): Frequent code merging, automated testing, and fast feedback.
  • Continuous Delivery (CD): Automating the release process so software is ready to deploy.
  • Continuous Deployment (CD): Automatically deploying changes to production.

You can’t achieve Continuous Delivery without first having Continuous Integration. Serverless architectures improve the “run” phase of the software lifecycle – things like scaling and infrastructure – but don’t inherently improve the “build” phase.

The author draws from DORA (DevOps Research and Assessment) to support this point. DORA metrics demonstrate a clear correlation between high-performing teams and their mastery of these fundamentals. Teams with robust CI/CD pipelines and a culture of ownership consistently deliver better software, faster, and with fewer bugs.

Implementing CI/CD requires cultural change – convincing engineers to embrace testing, code review, and automation requires buy-in. It also demands investment in tooling and processes. Serverless isn’t a magical solution; it’s a tool that requires strategic implementation alongside well-established software delivery practices. If your team is consistently missing deadlines or struggling with quality, focus on building those foundations first. Nice one!

[Read More]

Amazon DocumentDB serverless: Auto-scaling database solution for variable workloads

Categories

Tags serverless app-development web-development database aws

AWS recently announced the general availability (GA) of Amazon DocumentDB Serverless, an on-demand, auto-scaling configuration for Amazon DocumentDB, a MongoDB-compatible database service. Its primary goal is to simplify operational overhead and reduce costs for developers. However, it’s important to note that while AWS markets it as “serverless,” it aligns more with an auto-scaling model than a scale-to-zero model, which is a key differentiator often associated with true serverless offerings. By Steef-Jan Wiggers.

It achieves this through an auto-scaling, on-demand capacity model, eliminating the need for users to provision read/write units and manage database infrastructure. Amazon claims DocumentDB Serverless can reduce database costs by up to 90% compared to provisioned capacity, particularly for workloads with intermittent or unpredictable traffic patterns.

This release is significant because it lowers the barrier to entry for using a document database like DocumentDB, making it more accessible for smaller projects or those with fluctuating demands, and shifts operational responsibility to Amazon. The serverless option also automatically scales to handle spikes in traffic without manual intervention, improving application responsiveness and reliability. Good read!

[Read More]

SwiftUI + Core animation: Demystify all sorts of groups

Categories

Tags ios app-development web-development frameworks swiftlang

The Core Animation framework serves as the infrastructure between the top-level UI frameworks and the underlying rendering and composition techniques, whether you are using UIKit, AppKit, or SwiftUI to lay out or draw your top-level UI. By Juniper Photon.

The article addresses the challenge of understanding the underlying behaviors of Groups in SwiftUI, specifically CompositingGroup, DrawingGroup, and GeometryGroup. These groups can be confusing, and their names do not clearly indicate their functionality. The article introduces a technique for inspecting CALayers in a SwiftUI app, which helps to demystify the behaviors of these groups. By using the “Debug View Hierarchy” tool in Xcode and selecting “Show Layers,” developers can visualize the CALayer hierarchy and understand how SwiftUI views are rendered.

The article presents several key findings:

  • CompositingGroup creates a super CALayer with a specified opacity, allowing its sublayers to be treated as a single unit.
  • DrawingGroup composites a view’s contents into an offscreen image, flattening the view hierarchy into a single CALayer.
  • GeometryGroup isolates a view’s geometry, allowing its descendants to maintain a stable layout during animation.

These findings advance the field by providing a deeper understanding of the underlying rendering mechanisms in SwiftUI. By choosing the right group for their UI needs, developers can avoid common rendering pitfalls, optimize performance, and create more efficient and effective user interfaces. Nice one!

[Read More]