Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps: fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

PHP 8.5 adds pipe operator: What it means

Tags php cloud software-architecture app-development web-development

PHP 8.5, due out in November of this year, will bring with it another long-sought-after feature: the pipe operator (|>). It’s a small feature with huge potential, yet it still took years to happen. By Larry Garfield.

The pipe operator passes the value on its left as the sole argument to the callable on its right. On its own, that is not all that interesting. Where it becomes interesting is when it is repeated, or chained, to form a “pipeline.” For example, here’s a real code example:

$arr = [
  new Widget(tags: ['a', 'b', 'c']),
  new Widget(tags: ['c', 'd', 'e']),
  new Widget(tags: ['x', 'y', 'a']),
];

$result = $arr
    |> fn($x) => array_column($x, 'tags') // Gets an array of arrays
    |> fn($x) => array_merge(...$x)       // Flatten into one big array
    |> array_unique(...)                  // Remove duplicates
    |> array_values(...)                  // Reindex the array.
;

// $result is ['a', 'b', 'c', 'd', 'e', 'x', 'y']
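For readers who don’t write PHP, the same left-to-right chaining can be emulated in Python with `functools.reduce`. This is just a sketch of the semantics, not PHP’s operator; the `pipe` helper and dict-based widgets are stand-ins for the example above:

```python
from functools import reduce

def pipe(value, *fns):
    """Feed value through each function left to right, like PHP's |>."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

widgets = [
    {"tags": ["a", "b", "c"]},
    {"tags": ["c", "d", "e"]},
    {"tags": ["x", "y", "a"]},
]

result = pipe(
    widgets,
    lambda ws: [w["tags"] for w in ws],     # array_column($x, 'tags')
    lambda tag_lists: sum(tag_lists, []),   # array_merge(...$x)
    lambda tags: list(dict.fromkeys(tags)), # array_unique + array_values, order kept
)
print(result)  # ['a', 'b', 'c', 'd', 'e', 'x', 'y']
```

Each lambda receives the previous step’s output, mirroring how `|>` threads a single value through the chain.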

The article further explains:

  • What is a pipe operator?
  • Where did it come from?
  • More than the sum of its parts
  • What comes next?

The first of these upcoming features is a second attempt at Partial Function Application. This is a larger feature, but first-class callables already provide much of the necessary plumbing, which simplifies the implementation. With pipes now providing a natural use case, as well as easy optimization points, it’s worth a second attempt. Whether it makes it into PHP 8.5, is delayed to 8.6, or is again rejected is still an open question as of this writing, though I am hopeful. Major thanks to Arnaud Le Blanc of the PHP Foundation team for picking it up and updating the implementation. Interesting read!

[Read More]

Four hot startups aim to keep AI data centers cool

Tags startups cloud cio performance

Data centers consume up to one-third of their energy for cooling, driving demand for innovative solutions. Traditional cooling relies on large chillers and water systems, while direct-to-chip cooling, an emerging technology, attaches cold plates directly to heat sources on CPUs and GPUs. These plates use liquid or advanced materials to absorb heat locally, reducing reliance on centralized cooling, lowering energy use, and improving efficiency and sustainability. By Heather Clancy.

Meanwhile, the market for data center cooling technologies is poised to double over the next seven years, reaching a projected $42.5 billion by 2032. The category ranges from massive chillers that air-condition entire data center halls to newer technologies that directly cool servers and equipment racks.

https://www.fortunebusinessinsights.com/industry-reports/data-center-cooling-market-101959

Startups are attracting significant investment ($24M–$50M) and partnerships with industry leaders, signaling market confidence. However, adoption requires collaboration between IT and sustainability teams to evaluate trade-offs. While direct-to-chip cooling offers long-term savings, its implementation may demand rethinking infrastructure and vendor relationships. For managers, this trend underscores the need to prioritize cooling solutions that align with both performance and ESG targets, as the demand for AI scales. The shift also highlights the role of innovation in reducing operational costs and environmental impact, making it a strategic investment for future-proofing digital operations. Good read!

[Read More]

Commenting in MySQL: Syntax, best practices, and examples

Tags sql database app-development learning

MySQL comments are pieces of text that the MySQL engine ignores; they are used to explain and document your database code and don’t affect query execution. Treat your database comments as a resource for other people who will review or modify the database, and follow best practices so they stay readable to other developers. By Devart.

Commenting in MySQL is a practice that enhances SQL code readability, maintainability, and documentation. It involves adding notes within the code to explain logic, purpose, and usage. The article then explains:

  • Understanding MySQL comments
  • Types of comments in MySQL
  • Best practices for commenting in MySQL
  • Adding comments to MySQL database structures

Proper and accurate commenting is important for maintaining consistency and readability in your code. Comments explain the logic behind queries and make them easier for others to understand. The article introduces the fundamentals of code commenting in MySQL, including comment syntax rules and comment types. Nice one!
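To see that comments really are stripped before execution, here is a small sketch using Python’s built-in sqlite3, which accepts the same `--` and `/* */` comment styles as MySQL (MySQL additionally supports `#` for single-line comments); the `orders` table is a made-up example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    /* Orders placed through the web storefront.
       Multi-line comments document table-level intent. */
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        total REAL  -- inline comment: gross amount, tax included
    );
""")
conn.execute("INSERT INTO orders (total) VALUES (?)", (19.99,))

# The engine ignores every comment above; the query result is unaffected.
row = conn.execute("SELECT total FROM orders").fetchone()
print(row[0])  # 19.99
```

The same statements, minus the sqlite-specific wrapper, run unchanged against a MySQL server.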

[Read More]

The future of data processing: PostgreSQL evolution with YingJun Wu of RisingWave

Tags streaming app-development software-architecture database cio

RisingWave, founded by YingJun Wu, is a stream processing system that has gained attention for its PostgreSQL compatibility and efficient resource utilization. Unlike traditional systems like Apache Flink, RisingWave leverages S3 as its primary storage, similar to Snowflake’s architecture. This approach not only reduces storage costs but also enables seamless elastic scaling. By Firebolt Team.

The conversation highlights a trend in the data engineering space—decoupling storage and compute, inspired by Snowflake, and leveraging open formats like Iceberg to reduce vendor lock-in and enhance flexibility. RisingWave is positioned to capitalize on this trend, providing a potentially powerful and cost-effective solution for stream processing workloads.

Use Cases and Benefits:

  • Streamlined Data Processing: Organizations can process high-volume data streams with lower resource requirements.
  • Easy Integration: PostgreSQL compatibility simplifies adoption for developers and reduces the learning curve.
  • Vendor Lock-in Mitigation: By utilizing Apache Iceberg for data storage, RisingWave helps organizations avoid vendor lock-in.

Limitations and Future Directions:

  • Data Ingestion: RisingWave’s reliance on S3 may introduce latency in certain scenarios.
  • Evolving Ecosystem: As the landscape around Apache Iceberg and stream processing continues to evolve, RisingWave’s adaptability will be crucial.

For teams using Snowflake or similar systems, RisingWave offers a way to eliminate redundant data copies and lower costs by processing data in real time and storing it in Iceberg. Its single-binary deployment (similar to DuckDB) reduces operational overhead, making it ideal for startups or companies needing rapid testing. However, the shift to Iceberg-based pipelines may require rethinking how data is structured, queried, and maintained. RisingWave’s focus on scalability and open formats aligns with modern trends but demands careful planning to avoid pitfalls like S3 performance bottlenecks. Overall, it represents a strategic move toward unified, flexible data infrastructure that balances innovation with practicality. Good read!
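The core idea behind streaming systems like this is incremental materialized-view maintenance: update an aggregate on each arriving event instead of recomputing it from scratch. A toy pure-Python sketch of that idea (not RisingWave’s implementation; the event shape and class name are illustrative):

```python
from collections import defaultdict

class RunningCountView:
    """Toy incremental 'materialized view': per-key event counts,
    updated with an O(1) delta per event rather than a full rescan."""
    def __init__(self):
        self.counts = defaultdict(int)

    def apply(self, event):
        self.counts[event["user"]] += 1  # apply the delta incrementally

view = RunningCountView()
for ev in [{"user": "alice"}, {"user": "bob"}, {"user": "alice"}]:
    view.apply(ev)
print(dict(view.counts))  # {'alice': 2, 'bob': 1}
```

A real engine persists such state durably (in RisingWave’s case, on S3) and keeps many such views consistent under concurrent streams, which is where the engineering difficulty lies.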

[Read More]

Serverless is not a primary

Tags serverless app-development web-development cio devops

The article emphasizes that while serverless is a powerful and exciting architectural pattern, it’s frequently misapplied. Organizations often hope serverless will solve their software delivery problems, but it’s actually an enabler that requires strong foundational practices to realize its benefits. By Seth Orell.

Think of it as a hierarchy:

  • Continuous Integration (CI): Frequent code merging, automated testing, and fast feedback.
  • Continuous Delivery (CD): Automating the release process so software is ready to deploy.
  • Continuous Deployment (CD): Automatically deploying changes to production.

You can’t achieve Continuous Delivery without first having Continuous Integration. Serverless architectures improve the “run” phase of the software lifecycle – things like scaling and infrastructure – but don’t inherently improve the “build” phase.

The author draws from DORA (DevOps Research and Assessment) to support this point. DORA metrics demonstrate a clear correlation between high-performing teams and their mastery of these fundamentals. Teams with robust CI/CD pipelines and a culture of ownership consistently deliver better software, faster, and with fewer bugs.
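Two of the DORA metrics mentioned here, deployment frequency and change failure rate, are simple to compute from a deployment log. A minimal sketch, assuming a hypothetical log of (date, succeeded?) records:

```python
from datetime import date

# Hypothetical deployment log: (date, succeeded?)
deploys = [
    (date(2024, 5, 1), True),
    (date(2024, 5, 2), True),
    (date(2024, 5, 2), False),
    (date(2024, 5, 4), True),
]

# Days covered by the log, inclusive of both endpoints.
days_observed = (max(d for d, _ in deploys) - min(d for d, _ in deploys)).days + 1

deploy_frequency = len(deploys) / days_observed              # deploys per day
change_failure_rate = sum(not ok for _, ok in deploys) / len(deploys)

print(deploy_frequency, change_failure_rate)  # 1.0 0.25
```

Lead time for changes and time to restore service, the other two DORA metrics, need commit and incident timestamps but follow the same pattern.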

Implementing CI/CD requires cultural change – convincing engineers to embrace testing, code review, and automation requires buy-in. It also demands investment in tooling and processes. Serverless isn’t a magical solution; it’s a tool that requires strategic implementation alongside well-established software delivery practices. If your team is consistently missing deadlines or struggling with quality, focus on building those foundations first. Nice one!

[Read More]

Amazon DocumentDB serverless: Auto-scaling database solution for variable workloads

Tags serverless app-development web-development database aws

AWS recently announced the general availability (GA) of Amazon DocumentDB Serverless, an on-demand, auto-scaling configuration for Amazon DocumentDB, a MongoDB-compatible database service. Its primary goal is to reduce operational overhead and costs for developers. However, it’s important to note that while AWS markets it as “serverless,” it aligns more with an auto-scaling model than a scale-to-zero model, a key differentiator often associated with true serverless offerings. By Steef-Jan Wiggers.

It achieves this through an auto-scaling, on-demand capacity model, eliminating the need for users to provision read/write units and manage database infrastructure. Amazon claims DocumentDB Serverless can reduce database costs by up to 90% compared to provisioned capacity, particularly for workloads with intermittent or unpredictable traffic patterns.
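To make the auto-scaling model concrete, here is a target-tracking sketch of how capacity might be resized toward a utilization target. The function, the 70% target, and the unit bounds are illustrative assumptions, not AWS’s published algorithm:

```python
def next_capacity(current_units, utilization, target=0.7,
                  min_units=0.5, max_units=256.0):
    """Target-tracking sketch: size capacity so that utilization at the
    current load would land near the target. All parameters are
    assumptions for illustration, not AWS's actual policy."""
    desired = current_units * (utilization / target)
    return max(min_units, min(max_units, desired))

print(next_capacity(8, 0.90))  # traffic spike -> scale up above 8 units
print(next_capacity(8, 0.35))  # quiet period -> scale down to 4 units
```

Note how the floor (`min_units`) is why this is auto-scaling rather than scale-to-zero: capacity never drops below a paid minimum.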

This release is significant because it lowers the barrier to entry for using a document database like DocumentDB, making it more accessible for smaller projects or those with fluctuating demands, and shifts operational responsibility to Amazon. The serverless option also automatically scales to handle spikes in traffic without manual intervention, improving application responsiveness and reliability. Good read!

[Read More]

SwiftUI + Core Animation: Demystify all sorts of groups

Tags ios app-development web-development frameworks swiftlang

The Core Animation framework serves as the infrastructure between the top-level UI frameworks and the underlying rendering and composition techniques, whether you are using UIKit, AppKit, or SwiftUI to lay out or draw your top-level UI. By Juniper Photon.

The article addresses the challenge of understanding the underlying behaviors of Groups in SwiftUI, specifically CompositingGroup, DrawingGroup, and GeometryGroup. These groups can be confusing, and their names do not clearly indicate their functionality. The article introduces a technique for inspecting CALayers in a SwiftUI app, which helps to demystify the behaviors of these groups. By using the “Debug View Hierarchy” tool in Xcode and selecting “Show Layers,” developers can visualize the CALayer hierarchy and understand how SwiftUI views are rendered.

The article presents several key findings:

  • CompositingGroup creates a super CALayer with a specified opacity, allowing its sublayers to be treated as a single unit.
  • DrawingGroup composites a view’s contents into an offscreen image, flattening the view hierarchy into a single CALayer.
  • GeometryGroup isolates a view’s geometry, allowing its descendants to maintain a stable layout during animation.

These findings advance the field by providing a deeper understanding of the underlying rendering mechanisms in SwiftUI. By choosing the right group for their UI needs, developers can avoid common rendering pitfalls, optimize performance, and create more efficient and effective user interfaces. Nice one!

[Read More]

Microservices for machine learning

Tags microservices machine-learning big-data cloud agile

Learn how the author scaled an ML-powered finance tracker by breaking a monolithic design into microservices for better performance, maintainability, and deployment. The project started with a simple idea: automatically categorize bank transactions using a text classification model. The author trained a basic logistic regression model on personal transaction history, wrapped it in a Flask API, and called it done. By Ramya Boorugula.

The article proposes decomposing the ML stack into well‑defined microservices: a Feature‑Engineering service, a Model‑Inference service, a Training‑Pipeline service, and a Monitoring/Scaling service. Each microservice exposes a lightweight API, is containerized, and is orchestrated with Kubernetes or a serverless platform.
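The decomposition can be sketched as plain functions, each a stand-in for a service that would sit behind its own HTTP API and container in a real deployment. The service names, the keyword "model," and the categories are illustrative, not the article’s codebase:

```python
def feature_service(transaction: str) -> list:
    """Stand-in feature-engineering service: trivial tokenizer."""
    return transaction.lower().split()

def inference_service(features: list) -> str:
    """Stand-in model-inference service: keyword lookup instead of a
    trained classifier, so the sketch stays self-contained."""
    keywords = {"coffee": "dining", "rent": "housing"}
    for token in features:
        if token in keywords:
            return keywords[token]
    return "other"

def monitoring_service(label: str, log: list) -> None:
    """Stand-in monitoring service: record every prediction."""
    log.append(label)

log = []
label = inference_service(feature_service("Blue Bottle COFFEE 4.50"))
monitoring_service(label, log)
print(label)  # dining
```

The point of the pattern is the seams: swapping `inference_service` for a new model version touches one boundary, not the whole stack, which is what makes the independent A/B tests and rollbacks described below possible.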

By separating responsibilities, organizations can update a single service (e.g., swap in a new model version) without redeploying the entire stack, enabling faster A/B testing and rollbacks. Experiments show a 30% reduction in deployment time and a 25% increase in system throughput compared to a traditional monolith, though orchestration overhead and inter-service communication introduce some latency.

Adopting microservices for ML moves teams toward a production‑ready MLOps pipeline that supports continuous delivery, independent scaling, and fine‑grained observability—key enablers for rapid model iteration and resilient AI applications. Nice one!

[Read More]

How to evaluate graph retrieval in MCP agentic systems

Tags ai bots app-development web-development frameworks data-science

Agentic systems, which leverage tools like knowledge graphs, often struggle with effectively retrieving relevant information from these graphs to inform their decision-making process, leading to inaccuracies and inefficiencies. This article addresses the need for robust evaluation of graph retrieval components within these systems. By Tomaz Bratanic.

The author proposes a new evaluation framework, the Graph Retrieval Evaluation Pipeline (GREP), built on LangChain and designed to systematically assess graph retrieval performance across various query types and graph structures. GREP introduces automated tests focusing on the relevance, completeness, and faithfulness of retrieved subgraph information.
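At their simplest, two of these criteria reduce to precision and recall over graph nodes: relevance asks how much of what was retrieved matters, completeness asks how much of what matters was retrieved. A simplified sketch, with made-up medical node names (this is not GREP’s actual scoring code):

```python
def retrieval_scores(retrieved: set, relevant: set) -> dict:
    """Relevance as precision, completeness as recall over node sets."""
    hit = retrieved & relevant
    return {
        "relevance": len(hit) / len(retrieved) if retrieved else 0.0,
        "completeness": len(hit) / len(relevant) if relevant else 0.0,
    }

retrieved = {"aspirin", "ibuprofen", "unrelated_node"}
relevant = {"aspirin", "ibuprofen", "naproxen", "acetaminophen"}
print(retrieval_scores(retrieved, relevant))
# relevance 2/3 (one off-topic node), completeness 2/4 (two drugs missed)
```

Faithfulness, the third criterion, needs an extra check that the retrieved edges actually exist in the source graph, which is harder to reduce to a one-liner.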

Experiments utilizing GREP on a medicine-focused knowledge graph reveal that current retrieval methods exhibit weaknesses in handling complex queries requiring multi-hop reasoning and often provide incomplete or irrelevant information. The tests highlight the sensitivity of performance to the specific query formulation used.

This work provides a crucial tool for developers building agentic systems, helping them pinpoint weaknesses in their graph retrieval modules and improve the reliability and accuracy of their agents. GREP facilitates faster iteration and more targeted optimization of these systems, especially for knowledge-intensive tasks. Good read!

[Read More]

The myth of complexity: why microservice architecture doesn't work for you

Tags microservices devops app-development web-development monitoring

This article sparks a debate about the appropriateness of the microservices approach. While microservices are often touted as the key to scalability and agility, the author suggests that the architectural pattern can become a hindrance rather than a help, particularly if implemented without careful consideration. By Dorota Parad.

The article identifies the significant complexities introduced by decentralized architectures, including the challenges of distributed tracing, maintaining data consistency across services, and managing the increased deployment complexity. These factors can lead to higher development and operational costs.

Main points are:

  • Microservices are overhyped
  • Increased complexity: Microservices architectures introduce significant operational complexity (distributed tracing, inter-service communication, data consistency, deployment).
  • Higher costs: The complexity translates to increased development, operational, and maintenance costs.
  • Monoliths can be viable: A well-structured monolithic architecture can be a more efficient and cost-effective solution, especially for smaller projects or teams.
  • Pragmatic approach needed: Organizations should carefully evaluate their needs and capabilities before adopting microservices, rather than following trends blindly.
  • Focus on business value: Architectural decisions should be driven by tangible business value, not just by the desire to implement a popular architectural pattern.

The author argues that many organizations are drawn to microservices simply because they are fashionable, without fully appreciating the organizational and technical investment required. She suggests that a well-designed monolithic architecture can often provide a more efficient and maintainable solution, especially for smaller teams or applications with relatively low complexity. The article ultimately calls for a pragmatic evaluation process, urging teams to analyze their specific needs and infrastructure carefully before adopting microservices, prioritizing tangible business benefits over architectural buzzwords. Nice one!

[Read More]