Welcome to a curated list of handpicked free online resources related to IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Background of Coinbase's May 2025 breach

Categories

Tags infosec blockchain fintech crypto cio

Coinbase, America’s largest cryptocurrency exchange, received an unsolicited email from an unknown threat actor on May 11, 2025. The sender claimed to possess sensitive information about its customers and demanded a ransom of $20 million. By Dilip Kumar Patairya.

In May 2025, Coinbase disclosed a breach after receiving an unsolicited email claiming possession of customer data. The attackers had recruited overseas customer service agents in India to exfiltrate sensitive information gradually. A $20M ransom demand on May 11 led to increased scrutiny, and by May 21 the attackers had transferred $42.5M between Bitcoin and Ethereum using THORChain.

Coinbase’s comprehensive response included:

  • A $20M reward fund for actionable intelligence leading to arrests
  • Full reimbursement commitments (up to $400M in estimated costs) paired with one-year credit monitoring and identity restoration services
  • Enhanced account security requiring multi-factor verification for large withdrawals, coupled with scam-awareness prompts
  • Expansion of a U.S.-based support hub fortified with rigorous security protocols across all operations
  • Transparent collaboration with federal and international law enforcement, culminating in internal terminations and criminal referrals of involved insiders

Coinbase’s internal security team detected anomalies, terminated complicit employees, and publicly refused the ransom in an SEC filing. The breach impacted 69,461 accounts, exposing names, emails, masked financial identifiers, and transaction histories, but not private keys or wallet access. In the wake of large-scale data breaches at crypto platforms, you should take proactive steps to protect yourself from social engineering attacks; the article offers good advice on this last point. Interesting read!

[Read More]

Bitcoin Core to unilaterally remove controversial OP-Return limit

Categories

Tags app-development blockchain fintech crypto infosec

In 2014, crypto advertising barely existed. The term “Web3” hadn’t been coined, Facebook banned crypto ads, and startups promoting their tokens were mostly confined to forums and niche publications. But for Bitmedia founder Matvii Diadkov, the opportunity was obvious. By Martin Young.

Bitmedia now uses AI for fraud detection, creative analysis, and predictive bidding, aiming to let autonomous agents manage campaigns using real-time blockchain data. Compliance is handled via geo-targeted moderation systems to adhere to regional regulations. A DeFi client saw a 34% drop in cost-per-acquisition and 3.5x user retention by targeting wallet segments active on DEXs or staking.

Some key learnings from this article:

  • Bitmedia revolutionized Web3 marketing through AI and blockchain data
  • Compliance requires adaptive moderation across jurisdictions
  • Onchain analytics drive precise user targeting and retention
  • Continuous optimization ensures sustainable campaign success
  • Stealth innovation may redefine crypto advertising transparency and scalability

Long-term success stems from daily optimization of creatives, targeting, and bidding using blockchain analytics. Diadkov also launched Chainers.io, an NFT game, blending marketing insights with gaming experience. Future plans include a groundbreaking stealth tech that redefines crypto-ad intersections via blockchain transparency. Good read!

[Read More]

Compose Multiplatform for iOS is stable and production-ready

Categories

Tags app-development android kotlin ios java jvm

Compose Multiplatform 1.8.0 stabilizes iOS support, marking a milestone in cross-platform development. With this update, Kotlin Multiplatform becomes a complete solution for mobile development, enabling flexible code sharing across both business logic and UI without compromising app quality or losing control over platform-specific capabilities. By Ekaterina Petrova.

Key highlights include:

  • UI parity with Jetpack Compose, navigation deep linking, and accessibility support (VoiceOver, assistive tools).
  • Performance optimizations ensuring rapid startup, smooth scrolling, and minimal app size (~9MB extra vs native SwiftUI).
  • Ecosystem growth, with expanding libraries for architecture, DI, image loading, etc. (klibs.io).
  • Tooling improvements like Compose Hot Reload for instant UI iterations and upcoming IDE plugins.
  • Native feel via system-integrated scrolling, text editing, drag-and-drop, and adaptive UIs.

Successful adoption is evidenced by apps like Respawn (96% code reuse with Android) and high-performance benchmarks. Web support continues to evolve, promising polished experiences soon. Teams can now build production-ready iOS apps using Compose Multiplatform without compromising on platform-specific features. Start leveraging this robust framework today!

[Read More]

Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs

Categories

Tags azure cloud ai cio big-data

Large language models (LLMs) have demonstrated remarkable capabilities in reasoning, language understanding, and even creative tasks. Yet, a key challenge persists: how to efficiently integrate external knowledge. By Taketomo Isazawa.

KBLaM introduces “rectangular attention,” an extension of standard transformer attention. This mechanism integrates structured knowledge from external triples into LLMs as learnable key-value pairs, significantly boosting efficiency and scalability over traditional RAG or in-context learning for large KBs.

Unlike fine-tuning (costly retraining) or basic RAG (separate retrieval modules causing complexity), KBLaM encodes facts offline using JSON extraction and probabilistic clustering. These encoded knowledge tokens are then inserted into the LLM’s attention layers via rectangular attention, where user prompts attend to them, but they do not attend among themselves.

This allows dynamic retrieval during inference without retraining. Critically, it achieves linear scaling in memory and computational cost (inference) with KB size, whereas standard approaches incur quadratic costs. This efficiency enables integrating vast amounts of knowledge (thousands of facts) on a single GPU much more effectively than alternatives, enhancing reliability by teaching the LLM to refuse questions lacking necessary information. Nice one!
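
For intuition, here is a tiny, illustrative Python sketch of the rectangular attention idea (a toy under stated assumptions, not Microsoft’s KBLaM code): prompt tokens attend causally to the prompt and to every knowledge token, while knowledge tokens never attend among themselves, so the extra cost grows linearly with the number of knowledge tokens.

```python
# Illustrative sketch only -- not the KBLaM implementation.
# Assumes n_p prompt tokens and n_k knowledge tokens (one per encoded triple).
import torch
import torch.nn.functional as F

def rectangular_attention(prompt_q, prompt_k, prompt_v, kb_k, kb_v):
    """Prompt tokens attend causally to the prompt AND to all KB tokens;
    KB tokens are never used as queries, so cost is linear in KB size."""
    d = prompt_q.size(-1)
    # (n_p, n_p) causal scores over the prompt itself
    self_scores = prompt_q @ prompt_k.transpose(-1, -2) / d**0.5
    future = torch.triu(torch.ones_like(self_scores), diagonal=1).bool()
    self_scores = self_scores.masked_fill(future, float("-inf"))
    # (n_p, n_k) "rectangular" scores from prompt queries to KB keys
    kb_scores = prompt_q @ kb_k.transpose(-1, -2) / d**0.5
    # One softmax over the concatenated scores, then mix both value sets
    weights = F.softmax(torch.cat([self_scores, kb_scores], dim=-1), dim=-1)
    values = torch.cat([prompt_v, kb_v], dim=-2)
    return weights @ values

# Toy shapes: 4 prompt tokens, 1,000 knowledge tokens, hidden size 64
out = rectangular_attention(torch.randn(4, 64), torch.randn(4, 64),
                            torch.randn(4, 64), torch.randn(1000, 64),
                            torch.randn(1000, 64))
print(out.shape)  # torch.Size([4, 64])
```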

[Read More]

How much observability is enough?

Categories

Tags devops cloud kubernetes cio containers

Observability is the practice of watching what your systems do at every layer so that you can build a comprehensive picture of how they do what they do. By Dotan Horovits and Jujhar Singh.

In a popular episode of the OpenObservability Talks podcast, host Dotan Horovits, Logz.io’s principal technology evangelist, was joined by guest Jujhar Singh, at the time global DevSecOps practice lead at The Economist and currently a lead DevOps and infrastructure consultant at Thoughtworks. Their conversation focused on understanding how much observability is enough, including investment and stakeholder adoption.

The podcast discussion focuses on:

  • Why is observability important?
  • What is the minimum observability needed?
  • The human factor of implementing observability
  • Set clear objectives, consolidate tooling

You must first understand how much observability is enough for your needs and what role different observability tools will have within your organization. Good read!

[Read More]

Python adopts standard lock file format for reproducible installs

Categories

Tags python cloud infosec devops

Python’s ecosystem now has a standardized lock file format, pylock.toml, defined by PEP 751. The format was formally adopted after the proposal was accepted. By Sarah Gooding.

The main goal is more reproducible environments, especially in CI/CD and deployment. It addresses past issues with fragmented tooling built around formats like requirements.txt. The new format aims to be:

  • Tool-agnostic: Suitable for any installer.
  • Machine-generated but human-readable.
  • Secure: Mandatory file hashes for verification, unlike optional requirements.txt hashes.

pylock.toml records exact package versions, file hashes, sizes, download locations (wheel/sdist), platform constraints, extras, and dependency groups. This allows installers to perform installs predictably without needing complex resolution each time.
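
To make this concrete, here is a small Python sketch of what an installer-side consumer could look like; the key names used (packages, wheels, url, hashes) are assumptions loosely modelled on PEP 751 rather than the verified schema, and the file is assumed to sit in the working directory.

```python
# Minimal sketch of an installer-side consumer of a pylock.toml file.
# The key names (packages, wheels, url, hashes) are assumptions loosely
# based on PEP 751, not a verified schema.
import hashlib
import tomllib  # stdlib TOML parser, Python 3.11+
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Recompute the file hash and compare it to the locked value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

with open("pylock.toml", "rb") as f:
    lock = tomllib.load(f)

for package in lock.get("packages", []):
    name, version = package["name"], package["version"]
    for wheel in package.get("wheels", []):
        print(f"{name}=={version} -> {wheel['url']}")
        # A real installer would download the wheel, then verify it:
        # assert verify_artifact(downloaded_path, wheel["hashes"]["sha256"])
```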

The adoption standardizes lock files across tools like Poetry, PDM, or uv that generate them (lockers) and any tool that consumes them (installers). It enhances supply chain security by providing verifiable details about package sources and upload times. This is expected to improve reliability and become a key feature for packaging tools in the future. Good read!

[Read More]

Retrieval Augmented Generation (RAG) tutorial for beginners

Categories

Tags machine-learning data-science big-data ai learning

Retrieval-augmented Generation (RAG) is an AI approach that improves machine understanding and response accuracy. By integrating traditional AI language models with real-time retrieval of relevant external data, RAG bridges knowledge gaps, enabling more precise and contextually rich answers. By Vidhi Gupta.

This article introduces Retrieval Augmented Generation (RAG), a powerful technique combining Large Language Models (LLMs) with external data retrieval. Unlike static LLMs that can hallucinate or provide outdated info, RAG dynamically pulls relevant information from trusted sources before generating responses.
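
To illustrate the retrieve-then-generate flow in miniature, here is a self-contained Python sketch; the embed() and generate() functions are toy placeholders standing in for a real embedding model and LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then ground
# the prompt in them. embed() and generate() are toy placeholders.
from collections import Counter
import math

DOCS = [
    "RAG retrieves relevant external data before the model answers.",
    "Static LLMs can hallucinate or return outdated information.",
    "Lock files pin exact package versions for reproducible installs.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    return f"[LLM would answer here, grounded in]\n{prompt}"  # stand-in for a model call

question = "Why does RAG reduce hallucinations?"
context = "\n".join(retrieve(question))
print(generate(f"Use only this context:\n{context}\n\nQuestion: {question}"))
```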

Key benefits include:

  • Improved Accuracy: Reduces errors (“hallucinations”) by grounding answers in verified data.
  • Real-Time Data: Ensures responses use the most current knowledge available.
  • Enhanced Context: Leverages existing human-made content and expert knowledge bases for richer, more relevant outputs.

Common applications involve chatbots providing reliable customer support, summarizing research (e.g., legal or medical), translating languages accurately based on domain context, and personal assistants handling complex tasks using integrated information. Nice one!

[Read More]

Learnings from a machine learning engineer — data

Categories

Tags machine-learning data-science big-data how-to learning

Practical insights for a data-driven approach to model optimization. By David Martin.

The author emphasizes that data is fundamental for successful machine learning models, often overlooked compared to complex model architecture. Drawing from experience building image classification systems, particularly one identifying over 1,500 zoo animal classes with high accuracy, they stress the critical need for “good” and “correct” training data.

Good training data requires:

  • Subject Clarity: Animals must be clearly visible and identifiable (front and center), avoiding obscured features or multiple subjects. Ensure key distinguishing characteristics are prominent.
  • Correct Labels: Labels must accurately reflect the image content, especially since even subject matter experts can err. The ML engineer plays a crucial role in label quality assurance.

Handling bad data is essential – images that don’t clearly show the main object (like an open field with a zebra) or contain errors should be removed or flagged as “Unknown”.

Pragmatic strategies include:

  • Using synthetic image augmentation techniques early, like zooming to capture detail (a minimal sketch follows below).
  • Temporarily merging similar classes during development if data is sparse for one species, accepting the trade-off of generic identification.
  • Using bulk label generation by models to speed up labelling, even with less-perfect models.
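
As a minimal illustration of the zoom-style augmentation mentioned above, a torchvision-based sketch could look like the following; the specific transforms and parameters are illustrative assumptions, not the author’s pipeline.

```python
# Illustrative augmentation sketch (not the author's actual pipeline).
# A random zoomed crop plus flips and jitter approximates the
# "zoom to capture detail" idea for animal training images.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # random "zoom" crop
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Usage: augmented = augment(pil_image)  # pil_image is a PIL.Image of a training photo
```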

These practices form the bedrock of a reliable ML application. The next part will focus on creating specific datasets and evaluating the model effectively in production. Nice one!

[Read More]

Can vibe coding produce production-grade software?

Categories

Tags ai programming miscellaneous how-to learning

Thoughtworks explored “vibe coding,” where an AI generates software from minimal functional requirements without detailed architectural guidance. They tested this approach through three experiments building the System Update Planner application. By Premanand Chandrasekaran.

  • Vibe Coding (Exp1): Allowed full autonomy; generated basic but hard-to-maintain code with low test coverage and poor structure, struggling significantly with incremental changes.
  • High Discipline (Exp2): Imposed TDD, type safety, modularity, and commit hygiene; produced much better quality code aligned with production standards, though AI still occasionally reverted to unstructured habits needing human oversight and feedback loops.
  • Conversational Design (Exp3): Disabled tool memory, enabled richer architectural discussions; resulted in the cleanest, most maintainable and modular code.

The experiments highlight that structure, guidance, and collaboration significantly improve AI-generated code quality, and that disciplined prompting is crucial. Key takeaways:

  • Human intent and engineering discipline are essential for good results.
  • Collaboration (talking through design) yields better outcomes than pure autonomy.
  • AI models still need refinement to inherently optimize for rigorous standards.

Future development may involve AI as a reliable teammate, potentially shifting towards smaller, more replaceable code modules due to evolving tool capabilities and needs. Good read!

[Read More]

Fourteen advanced Python features

Categories

Tags programming python how-to learning

Python is one of the most widely adopted programming languages in the world. Yet, because of its ease and the simplicity of just “getting something working”, it’s also one of the most underappreciated. By Edward Li.

The article gives an overview of features like:

  • Typing overloads
  • Keyword-only and positional-only arguments
  • Future annotations
  • Generics
  • Protocols
  • Context managers
  • Structural pattern matching

… and more: 14 of the most interesting and underrated Python features the author has encountered in their Python career. You will also get links to additional resources. A couple of the listed features are sketched below. Interesting read!
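
As a taste of two of the listed features, here is a short, self-contained sketch; these are illustrative examples of keyword-only/positional-only parameters and Protocols, not code taken from the article.

```python
# Illustrative examples of two listed features (not from the article).
from typing import Protocol

# Positional-only (before /) and keyword-only (after *) parameters
def resize(image, /, *, width: int, height: int) -> str:
    return f"resized {image} to {width}x{height}"

print(resize("cat.png", width=640, height=480))   # OK
# resize(image="cat.png", width=640, height=480)  # TypeError: image is positional-only

# Protocols enable structural ("duck") typing checked by static type checkers
class Closable(Protocol):
    def close(self) -> None: ...

class TempFile:
    def close(self) -> None:
        print("temp file closed")

def shutdown(resource: Closable) -> None:
    resource.close()

shutdown(TempFile())  # TempFile never subclasses Closable, yet it conforms
```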

[Read More]