Welcome to a curated list of handpicked free online resources related to IT, cloud, big data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Introduction to Linear Programming in Python

Categories

Tags python app-development programming open-source

A guide to mathematical optimization with Google OR-Tools. Linear programming is a technique for optimizing problems with multiple variables and constraints. It’s a simple but powerful tool every data scientist should master. By Maxime Labonne.

Fortunately for us, there is a method that can solve our problem in an optimal way: linear programming (or linear optimization), which is part of the field of operations research (OR). In this article, we’ll use it to find the best numbers of swordsmen, bowmen, and horsemen to build the army with the highest power possible.

You can run the code from this tutorial with the following Google Colab notebook.

The article then describes:

  • Solvers
  • Variables
  • Constraints
  • Objective
  • Optimize!

In Python, there are different libraries for linear programming such as the multi-purpose SciPy, the beginner-friendly PuLP, the exhaustive Pyomo, and many others. Excellent read!
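As a sketch of the kind of model the article builds, here is the army problem expressed with SciPy's `linprog`, one of the libraries mentioned above. The resource budgets and unit stats below are illustrative numbers in the spirit of the tutorial's example, so treat them as assumptions:

```python
from scipy.optimize import linprog

# Decision variables: x = [swordsmen, bowmen, horsemen].
# We maximize power = 70*s + 95*b + 230*h, so we minimize the negated objective.
power = [-70, -95, -230]

# Resource constraints: each row is one resource, each column the cost of
# recruiting one unit of that troop type.
A_ub = [
    [60, 80, 140],  # food cost per unit
    [20, 10, 0],    # wood cost per unit
    [0, 40, 100],   # gold cost per unit
]
b_ub = [1200, 800, 600]  # available food, wood, gold

res = linprog(power, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
print(res.x)     # optimal troop counts
print(-res.fun)  # total army power
```

Note that `linprog` solves the continuous relaxation; this particular instance happens to have an integer optimum, but for guaranteed integer troop counts an integer-capable solver such as the OR-Tools ones the article covers is the right tool.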

[Read More]

Three myths of open source software risk and the one nobody is discussing

Categories

Tags web-development app-development miscellaneous open-source

Open source software is being vilified once again and, in some circles, even considered a national security threat. Open source software risk has been a recurring theme: First it was classified as dangerous because anyone could work on it and then it was called insecure because nobody was in charge. By Josh Bressers.

Let’s consider where open source stands today. It’s running at minimum 80% of the world. Probably more. Some of the most mission-critical applications and services on the planet (and on Mars) are open source. The reality is, open source software isn’t inherently more risky than anything else. It’s simply misunderstood, so it’s easy to pick on.

The article then discusses:

  • Myth 1: Open source software is a risk because it isn’t secure
  • Myth 2: Open source software is a risk because it isn’t high quality
  • Myth 3: Open source software is a risk because you can’t trust the people writing it
  • The true risk of open source software

In an era where the use of open source software is only increasing, the true risk in using open source, or any software for that matter, is failing to understand how it works. In the early days of open source, we could only understand our software by creating it. There wasn’t a difference between being an open source user and an open source contributor. Nice one!

[Read More]

New superconductors could make faster quantum computers

Categories

Tags programming cio app-development miscellaneous software-architecture

Practical quantum computers could soon arrive with profound implications for everything from drug discovery to code-breaking. By Sascha Brodsky.

One of the biggest challenges in quantum computing today relates to how we can make superconductors perform even better.

Making practical quantum computers could hinge on finding better ways to use superconducting materials which have no electrical resistance. In a step toward building better quantum machines, researchers at Oak Ridge National Laboratory recently measured the electrical current between an atomically sharp metallic tip and a superconductor. This new method can find linked electrons with extreme precision in a move that could help detect new kinds of superconductors, which have no electrical resistance.

Better superconductors may be key to making practical quantum computers. Michael Biercuk, the CEO of quantum computing company Q-CTRL, said in an email interview that most current quantum computing systems use niobium alloys and aluminum, in which superconductivity was discovered in the 1950s and 1960s.

While we see small advances in each of the indicated technological directions, combining them into a good working device is still elusive. The ‘Holy Grail’ of quantum computing is a device with hundreds of qubits and low error rates. Scientists can’t agree on how they will achieve this goal, but one possible answer is using superconductors. Interesting read!

[Read More]

Don't mix refactorings with behavior changes

Categories

Tags programming code-refactoring software app-development devops

Probably the biggest reason not to mix refactorings with behavior changes is that it makes it too easy to make a mistake. By Jason Swett.

When you look at the diff between the before and after versions of a piece of code, it’s not always obvious what the implications of that change are going to be. The less obvious the implications are, the more opportunity there is for a bug to slip through.

When you mix refactoring with behavior changes, it’s hard or impossible for a reviewer to tell which is which. It makes a discussion about a code change harder because now the conversation is about two things, not just one thing. This makes for a potentially slow and painful PR review process.

How to approach refactorings instead:

  • Set aside my current feature branch
  • Create a new branch off of master on which to perform my refactoring
  • Merge my refactoring branch to master (and preferably deploy master to production as well)
  • Merge or rebase master into my feature branch
  • Resume work on my feature branch

If a developer deploys a behavior change that was mixed with a refactoring, and then discovers that the deployment introduced a bug, they won’t know whether the refactoring or the behavior change was responsible, because the two were mixed together. Good read!

[Read More]

Ten best practices for refactoring code

Categories

Tags web-development app-development programming performance code-refactoring

As software developers, we are constantly faced with the need to improve and optimize our code. Whether it’s for performance, readability, or maintainability, refactoring code is an essential skill. By Tomek Skupiński.

There are a number of different techniques that can be used when refactoring code. In this article, we will explore some of the best practices for refactoring code. The blog post then focuses on:

  • Identify the problem areas
  • Make a plan
  • Keep your changes small
  • Write tests
  • Refactor incrementally
  • Use a refactoring tool
  • Document your changes
  • Use a source control system
  • Perform regression testing
  • Be prepared to undo changes

Refactoring code is an essential skill for every software developer. By following the best practices outlined in this article, you can make sure that you won’t get lost in the process. Nice one!
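To make the “write tests” and “keep your changes small” practices concrete, here is a minimal, hypothetical example: a characterization test is written first, then the function is refactored in one small step while the test keeps passing. The function and its original shape are invented for illustration, not taken from the article:

```python
def total_price(items):
    """Sum price * quantity over a list of (price, quantity) pairs.

    Refactored version: the hypothetical original accumulated the total
    in an index-based loop; a generator expression expresses the same
    logic more directly without changing behavior.
    """
    return sum(price * quantity for price, quantity in items)


def test_total_price():
    # Characterization test written *before* refactoring, so any
    # accidental behavior change is caught immediately.
    assert total_price([]) == 0
    assert total_price([(10, 2), (3, 5)]) == 35


test_total_price()
```

Because the refactoring commit contains only this change plus a passing test, a reviewer can confirm at a glance that behavior is unchanged.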

[Read More]

How to monitor Docker with Telegraf and InfluxDB

Categories

Tags monitoring docker containers app-development devops

Docker is an increasingly popular choice for businesses dealing with containerized applications. However, as with any new technology, Docker introduces complexities that need to be managed. Some of these complexities relate to infrastructure and application monitoring. Due to the abstraction offered by containers, traditional monitoring solutions might not be suitable for Docker-based workloads. By Cameron Pavey.

The article provides information on the following:

  • Why monitor Docker
  • Prerequisites
  • Monitoring Docker with InfluxDB and Telegraf

As a time series database, InfluxDB is perfectly positioned to store and visualize the kind of metrics that application monitoring often deals with, as there are usually lots of data points at regular intervals. With large volumes of data like this, you must have a mechanism to visualize, search, and understand the data to derive insights. InfluxDB fits the bill in this regard thanks to its easy-to-configure visualizations and the powerful Flux data scripting language that allows you to query and analyze your data. Good read!
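As a sketch of the setup the article walks through, a minimal `telegraf.conf` pairing Telegraf's Docker input plugin with the InfluxDB v2 output looks roughly like this; the token, organization, and bucket values are placeholders you would substitute for your own:

```toml
# Collect container metrics from the local Docker daemon.
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

# Ship the collected metrics to InfluxDB 2.x.
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"     # placeholder: your InfluxDB API token
  organization = "my-org"     # placeholder
  bucket = "docker-metrics"   # placeholder
```

With this in place, `telegraf --config telegraf.conf` starts pushing per-container CPU, memory, and network metrics that can then be queried and visualized with Flux.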

[Read More]

A beginner's guide to benchmarking with NoSQLBench

Categories

Tags monitoring tdd nosql app-development devops

There are several benchmarking tools in the market but most of them require esoteric coding knowledge. NoSQLBench is simple to use while providing sophisticated benchmarking for Cassandra and other NoSQL databases. It provides results within minutes. By Jones-Gilardi.

In this post, you’ll get hands-on experience with benchmarking and stress testing Cassandra using NoSQLBench. Rather than going in-depth, our tutorial will scratch the surface and cover:

  • Understanding parameters and key metrics for benchmarking
  • How cycles, bindings and statements work together
  • Experimenting with stdout
  • Scaling up a test and customizing your own scenarios
  • Packaging a performance test with named scenarios

NoSQLBench is an open-source, pluggable testing tool for the NoSQL ecosystem. It’s primarily designed to test Cassandra, but you can also use it for other NoSQL technology like Apache Kafka, MongoDB, and DataStax Astra DB. Good read!

[Read More]

A deep dive into OpenTelemetry metrics

Categories

Tags monitoring cloud cio app-development devops

OpenTelemetry is an open-source observability framework for infrastructure instrumentation hosted by the Cloud Native Computing Foundation (CNCF). The project gained a lot of momentum with contributions from all major cloud providers (AWS, Google, Microsoft) as well as observability vendors (including Timescale), to the point that it became the second-highest-ranked CNCF project by activity and contributors, behind only Kubernetes itself. By James Blackwood-Sewell.

Diagram illustrating the elements of the MeterProvider in OpenTelemetry

Source: https://www.cncf.io/blog/2022/06/08/a-deep-dive-into-opentelemetry-metrics/

OpenTelemetry aims to define a single standard across all types of observability data (which it refers to as signals), including metrics, logs, and traces. Through a collection of tools, libraries, APIs, SDKs, and exporters, OpenTelemetry radically simplifies the process of collecting signals from your services and sending them to the backend of your choice, opening the doors of observability to a wider range of users and vendors.

The article then covers:

  • OpenTelemetry metrics
  • Measurements to metrics
  • Instruments and emitting measurements
  • Views and aggregations

A View in OpenTelemetry defines an aggregation, which takes a series of measurements and expresses them as a single metric value at that point in time. As more measurements are created, the metric is continuously updated. If there is no View created for an Instrument, then a default aggregation is selected based on the Instrument type. Custom views can be targeted by Meter name, Instrument name, Instrument type, or with a wildcard. Nice one!
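The measurement-to-metric relationship described above can be illustrated with a little plain Python. This is a conceptual sketch of what a sum aggregation does, not the OpenTelemetry SDK API: an instrument emits individual measurements, and the aggregation folds them into one continuously updated metric value.

```python
class SumAggregation:
    """Conceptual stand-in for a View's sum aggregation: it collapses a
    stream of measurements into a single current metric value."""

    def __init__(self):
        self.value = 0

    def record(self, measurement):
        # Each incoming measurement updates the running metric value.
        self.value += measurement


http_requests = SumAggregation()

# Each call stands in for a Counter-style instrument emitting a measurement.
for batch_size in (1, 1, 3):
    http_requests.record(batch_size)

print(http_requests.value)  # current metric value: 5
```

A histogram or last-value aggregation would differ only in how `record` folds measurements into the stored state, which is exactly the choice a View lets you make per instrument.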

[Read More]

How Power BI metrics and scorecards can transform productivity within business objectives

Categories

Tags cloud cio management monitoring

Metric and goal setting is paramount when formulating business operations. That’s why Microsoft’s Metrics feature (previously known as “Goals”) makes it possible to keep every member of a team striving toward a singular, unified key objective and ensures a higher probability of positive outcomes. By Jocelyn Porter.

Currently, most goal tracking systems require manual updates and are not immediately connected to a business’ data source. This makes it difficult to not only maintain metrics, but to dissect them for further analysis. Thankfully, Metrics in Power BI allow businesses to do just that – to take a look inside the data when further analysis is required.

The article then explains:

  • Power BI metrics
  • Creating and sharing metrics
  • Metric and Details Pane
  • Status rules
  • Utilizing submetrics
  • Data metrics
  • Scorecards and workspaces

By fully utilizing Power BI Metrics and Scorecards, you can revolutionize productivity in your work environment. Microsoft Power BI Metrics is a fully customizable and shareable way of tracking KPIs that allows businesses to build a strong platform for aligning business objectives with actionable insights. Good read!

[Read More]

Blockchain scalability: Execution, storage, and consensus

Categories

Tags cloud blockchain teams performance data-science learning

Trust minimization is a valuable security property that blockchain technology is uniquely positioned to generate—replacing handshakes, brand reputation, and paper contracts with guarantees based on computer code, cryptography, and decentralized consensus. These superior guarantees provided by blockchains form the basis of cryptographic truth. By chain.link.

The main points covered in this article:

  • Blockchains vs. traditional computing
  • Three key properties of blockchain scaling
  • Scaling the execution layer
  • Scaling data storage
  • Scaling consensus
  • A scalable and secure cross-chain future

Blockchains have succeeded in bringing trust minimization to new use cases including monetary policy (e.g. Bitcoin) and digital asset trading (e.g. DEXs). However, blockchains have historically struggled to maintain trust minimization for use cases that require speeds and costs comparable to traditional computing systems. These scalability limitations are felt by users in the form of high transaction costs, and they cause developers to doubt whether blockchains can support high-value use cases that hinge on handling data in real time. Good read!

[Read More]