Welcome to our curated list of handpicked free online resources covering IT, cloud, Big Data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Running VMware Tanzu RabbitMQ on VMware Tanzu Kubernetes Grid

Categories

Tags cloud containers kubernetes devops distributed

Whether you’re integrating multiple microservices or building a new streaming app, you’ll need a modern messaging and streaming service. RabbitMQ is one of the most popular open-source messaging and streaming brokers. By Yimeng Liu.

Tanzu RabbitMQ is a fast and dependable messaging and streaming system that supports a wide range of use cases, including reliable integration, content-based routing, global data delivery, high-volume monitoring and data ingestion. With Tanzu RabbitMQ for Kubernetes, developers can provision both the Tanzu RabbitMQ and the open-source RabbitMQ message brokers with simple commands on top of any Kubernetes cluster. The Operator works automatically with the Kubernetes runtime to maintain the desired cluster state.
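
As a sketch, provisioning a cluster through the Operator can be as simple as applying a RabbitmqCluster manifest like the following (the name and replica count are placeholders, and the RabbitMQ Cluster Operator must already be installed on the cluster):

```yaml
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: demo-rabbitmq
spec:
  replicas: 3
```

Applied with `kubectl apply -f`, the Operator then creates and reconciles the StatefulSet, services, and credentials needed to reach the desired cluster state.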

The article is about:

  • What is Tanzu RabbitMQ?
  • Why Run Tanzu RabbitMQ on Tanzu Kubernetes Grid?
  • How to Deploy Tanzu RabbitMQ on TKG?
  • Tanzu RabbitMQ capabilities
  • Tanzu RabbitMQ observability
  • Performance

In this solution, Tanzu RabbitMQ clusters are deployed on Tanzu Kubernetes Grid, which simplifies the operation of cloud-native workloads and can scale without compromise. Running Tanzu RabbitMQ on Tanzu Kubernetes Grid provides self-service deployment with automated operations, full observability, and fast time to recovery; the solution thus increases your business continuity and security in any environment. Nice one!

[Read More]

Tiered datastore solution for high data growth MySQL using Distributed SQL Databases (DSQL)

Categories

Tags cloud database sql cio distributed

Usually, entities like orders and order items tend to grow substantially year on year as we scale and serve larger customers. MySQL is the widely used datastore thanks to its durability and ACID guarantees. While MySQL is a brilliant piece of the tech stack, it comes with the overhead of data maintenance. By Manohar K.

Warm Store: This is a store “similar” to your transactional store. All entities that have completed their lifecycle are moved here. Note that only the last X time frame of data is kept in this store. The latency requirement is also a little relaxed here, as the rate of access is “relatively” low. Hint: think of a MySQL-like store, but horizontally scalable.

The article then takes us on the journey and explains:

  • MySQL scaling
  • Phases of MySQL scaling
  • Challenges with sharded MySQL
  • Solution overview

At a high level, we need to come up with 3 different layers of data stores with the following characteristics:

  • Hot Store
  • Warm Store
  • Cold Store
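
As a minimal sketch of the idea, a router could pick the target store from a record's age and lifecycle state. The retention boundaries below are hypothetical; the article leaves the exact "X time frame" cut-offs to each team:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention boundaries (the article does not prescribe these).
WARM_AFTER = timedelta(days=90)
COLD_AFTER = timedelta(days=365)

def pick_store(last_updated: datetime, lifecycle_complete: bool) -> str:
    """Route a record to the hot/warm/cold tier by age and lifecycle state."""
    age = datetime.now(timezone.utc) - last_updated
    if not lifecycle_complete or age < WARM_AFTER:
        return "hot"    # transactional MySQL
    if age < COLD_AFTER:
        return "warm"   # horizontally scalable, MySQL-compatible (e.g. TiDB)
    return "cold"       # cheap archive/object storage

now = datetime.now(timezone.utc)
print(pick_store(now, lifecycle_complete=False))                       # hot
print(pick_store(now - timedelta(days=120), lifecycle_complete=True))  # warm
print(pick_store(now - timedelta(days=400), lifecycle_complete=True))  # cold
```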

Now, there are multiple DSQL databases available; the author chose PingCAP’s TiDB. Along with its excellent core DSQL features and MySQL compatibility, TiDB also provides a plethora of tooling, which came in very handy in this solution. The article lists the tools provided by TiDB.

When it comes to data management, especially with a technology like MySQL, the archival policy people usually follow is a one-time, big-bang activity of data purge and movement. Good read!

[Read More]

How to build a powerful e-learning platform using Scala and Redis

Categories

Tags cloud miscellaneous scala java machine-learning big-data nosql

Never before has online learning been so accessible. Whether you want to discover more about cryptocurrency, sharpen your programming skills or even just learn a new language, the digital age has gifted everyone access to a phenomenal amount of content. However, over time e-learning has come to be viewed as just another digital commodity, where users expect all online content to be instantaneous. Speed remains crucial to performance, where any lags or delays in page loading time kill the user’s experience. By Redis Growth Team.

Architecture model for e-learning platform

Source: https://redis.com/blog/how-to-build-a-powerful-e-learning-platform-using-scala-and-redis/

In this tutorial you’ll build a powerful e-learning platform that will connect students and teachers with one another along with a diverse library of online courses. With speed being the linchpin to performance, you’ll deploy a number of different Redis components to achieve this objective.

The data model is expressed through nodes and relations using RedisGraph. The model is very simple, involving the Student, Course and Topic entities and the different kinds of relations between them. This far into the digital age, a basic prerequisite of any application is that it operates at maximum speed. This is especially true for e-learning platforms, where users are meant to stay engaged with course content for long periods of time. Good read!
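
To give a feel for the node/relation model, here is a tiny in-memory sketch of the Student–Course–Topic graph; the entity names and relation labels are illustrative, not the article's exact RedisGraph schema:

```python
# Relations stored as (source, relation, destination) triples,
# mimicking the edges a graph database would hold.
relations = set()

def relate(src: str, rel: str, dst: str) -> None:
    relations.add((src, rel, dst))

def neighbours(src: str, rel: str) -> list[str]:
    """Return all destinations reachable from src via the given relation."""
    return sorted(d for s, r, d in relations if s == src and r == rel)

relate("student:ada", "ENROLLED_IN", "course:scala-101")
relate("student:ada", "ENROLLED_IN", "course:redis-basics")
relate("course:scala-101", "COVERS", "topic:functional-programming")

print(neighbours("student:ada", "ENROLLED_IN"))
```

In RedisGraph the same question would be a Cypher query matching the `ENROLLED_IN` pattern, answered in-memory at very low latency.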

[Read More]

Learn more about distributed databases with ShardingSphere

Categories

Tags database distributed nosql cio

Apache ShardingSphere is an open source distributed database, plus an ecosystem users and developers need for their database to provide a customized and cloud-native experience. By Trista Pan.

Database Plus sets out to build a standard layer and an ecosystem layer above the fragmented basic services of databases. It provides a unified, standardized database usage specification for upper-level applications and minimizes, as much as possible, the challenges businesses face due to underlying database fragmentation. To link databases and applications, it uses traffic and data rendering and parsing. It provides users with enhanced core features, such as a distributed database, data security, database gateway, and stress testing.

In the three years since it joined the Apache Software Foundation, the ShardingSphere core team has worked hard with the community to create an open source, robust, and distributed database and a supporting ecosystem.

The content of the article is split into:

  • Database Plus
  • Standardized cluster management with DistSQL
  • Multi-access terminal
  • Distributed governance
  • Monitoring with Grafana
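
As a hedged sketch of what DistSQL looks like (the exact syntax varies between ShardingSphere versions, and the table and storage-unit names here are hypothetical):

```sql
-- Define a sharding rule for a hypothetical t_order table
-- across two previously registered storage units.
CREATE SHARDING TABLE RULE t_order (
    STORAGE_UNITS(ds_0, ds_1),
    SHARDING_COLUMN=order_id,
    TYPE(NAME="hash_mod", PROPERTIES("sharding-count"="4"))
);

-- Inspect the rules the cluster currently manages.
SHOW SHARDING TABLE RULES;
```

The point of DistSQL is that cluster topology and sharding rules are managed with SQL-like statements instead of static config files.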

The community is continuing to optimize ShardingSphere and to integrate new ideas and industry scenarios. The community built it, and one of the main driving forces of development is user feedback. Good read!

[Read More]

3 ways to use load tests beyond performance

Categories

Tags tdd cloud miscellaneous performance agile web-development

Most teams use load testing only for performance or stress tests. But they can also help uncover infrastructure issues early. Read on to see how load tests can help make your entire system more resilient at its foundation. By Dennis Martinez.

When discussing load testing, two immediate thoughts come to mind for most developers and testers: validating application performance and putting systems under immense pressure. Testing for both of these use cases is vital for any modern software development workflow.

The content of the article focuses on the following 3 areas:

  • Use load testing to ensure your infrastructure configuration works as expected
  • Use load testing to test the resiliency of your hardware
  • Use load testing to verify your serverless applications

Load testing can also become a powerful tool for developers, testers, and anyone responsible for your application’s resiliency. Applications are becoming highly complex with more moving parts, and we need to do our best to deliver a high-quality solution from top to bottom. Teams usually employ load testing to obtain performance metrics or to stress-test an application’s capacity, but as this article discusses, it can assist with a lot more. Good read!

[Read More]

How does an SQL injection attack work? Examples & types

Categories

Tags servers sql database miscellaneous cloud cio distributed

A SQL injection (SQLi) attack is one of the most threatening issues for data integrity and confidentiality today, allowing attackers to access secure data where they are not authorized. In this article, we discuss SQLi and how these attacks work, with types and examples. By Al Mahmud Al Mamun.

SQL injection or insertion is a malicious attack technique that exploits vulnerabilities of SQL-based applications. With SQLi, hackers inject arbitrary code into SQL queries, which allows them to directly add, modify, and delete records stored in a database. SQLi attacks can affect any web application or website involved with a SQL database, such as MySQL, SQL Server, Oracle, and others.
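
The classic failure is building the query by string concatenation. This minimal Python/SQLite sketch (table and values invented for illustration) shows the vulnerable pattern next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "' OR '1'='1"

# VULNERABLE: user input is concatenated straight into the SQL string,
# so the injected OR '1'='1' clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the input as a literal value,
# so the malicious string matches no user name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # injection returns every row; the safe query returns none
```

The same principle holds for MySQL, SQL Server, Oracle, and the rest: never interpolate untrusted input into SQL text; always bind it as a parameter.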

The article describes:

  • What is a SQL injection?
  • How does a SQL injection attack work?
  • Examples of SQL injection attacks
  • Types of SQL injections

Every organization needs to focus on protecting its valuable information from SQLi attacks. There are many automatic detection tools available to test for these vulnerabilities. A layered approach that includes data-centric strategies can be the optimal defense for SQLi attacks, where data focuses on protecting itself, as well as the applications and network. Good read!

[Read More]

How Uber migrated financial data from DynamoDB to Docstore

Categories

Tags database cloud software-architecture distributed

Each day, Uber moves millions of people around the world and delivers tens of millions of food and grocery orders. This generates a large number of financial transactions that need to be stored with provable completeness, consistency, and compliance. By Piyush Patel, Jaydeepkumar Chovatia, and Kaushik Devarajaiah.

LedgerStore is an immutable, ledger-style database storing business transactions. LedgerStore provides signing/sealing of data to guarantee data completeness/correctness, strongly consistent indexes, and automatic data tiering. LedgerStore uses DynamoDB as its storage backend. Running LedgerStore in production for almost 2 years at Uber scale, we amassed a large amount of data as trip and order volume grew. Over this period of time we realized that operating LedgerStore with DynamoDB as a backend was becoming expensive. Also, having different databases in our portfolio creates fragmentation and makes it difficult to operate.
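
To illustrate the sealing idea (this is a generic hash-chain sketch, not Uber's actual scheme), each batch of immutable records can be chained to the seal of the previous batch, so any later tampering is detectable:

```python
import hashlib
import json

def seal_batch(records: list[dict], prev_seal: str) -> str:
    """Chain a batch of immutable records to the previous seal.

    Any later modification to a record changes this digest and every
    digest after it, which is how tampering or data loss is detected.
    """
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(prev_seal.encode() + payload).hexdigest()

# Seal two consecutive batches; the genesis seal is a fixed constant.
seal1 = seal_batch([{"txn": 1, "amount": 100}], prev_seal="genesis")
seal2 = seal_batch([{"txn": 2, "amount": 250}], prev_seal=seal1)

# Replaying the unmodified data reproduces the same chain of seals,
# which is how a backfill can be verified end to end.
assert seal2 == seal_batch([{"txn": 2, "amount": 250}], prev_seal=seal1)
```

A verification job that replays batches and recomputes seals is one way a 250-billion-record backfill can be checked for completeness without trusting the copy process itself.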

In this post today we are going to talk about rearchitecting some of the core components of LedgerStore on top of Docstore, Uber’s general-purpose multi-model database:

  • What is LedgerStore?
  • Data model
  • Data integrity
  • LedgerStore 2.0 design considerations
  • Architecture
    • Docstore table design
    • Data sealing
    • Data backfill (more than 250 billion unique records, ~300TB of historical data from DynamoDB)
  • DynamoDB to Docstore migration

… and more. The authors have also taken a deep dive into the architecture and explained how the entire migration was designed and executed without impacting stringent SLAs and online flows. We liked this one: “We backfilled 250 billion unique records and not a single data inconsistency has been detected so far, with the new architecture in production for over 6 months.” Super interesting read!

[Read More]

Introducing quantum serverless

Categories

Tags serverless cloud miscellaneous cio distributed

Introducing Quantum Serverless, a new programming model for leveraging quantum and classical resources. By Blake Johnson, Ismael Faro, Michael Behrendt, Jay Gambetta @ibm.com.

Integrating quantum into real-world workflows will take advancements across the stack. We need to think holistically about quantum performance, including the scale, quality, and speed of our processors.

We need to enter the realm of quantum advantage, where quantum computers are either cheaper, faster, or more accurate than classical computers at the same relevant task.

A recent example we demonstrated is entanglement forging, which exploits symmetry in chemistry problems to simplify the circuit knitting. Meanwhile, quantum embedding re-frames the problem to allow classical computers to simulate those pieces that can be well-approximated classically, while looping in quantum resources for only the classically difficult parts of the problem. In the context of chemistry, this might describe an active-space calculation that runs on the QPU with a Hamiltonian iteratively updated by a classical simulation of the inactive space.

Finally, error mitigation uses classical post-processing in order to reduce the impact of some classes of errors and get a more-accurate quantum solution. We hope that Quantum + Classical will allow us to realize quantum advantage in certain applications sooner than expected. For the details please follow the link to the full article. Good read!

[Read More]

How much has Quantum Computing actually advanced?

Categories

Tags machine-learning big-data cloud servers distributed

Lately, it seems as though the path to quantum computing has more milestones than there are miles. Judging by headlines, each week holds another big announcement—an advance in qubit size, or another record-breaking investment. This is a Q&A with John Martinis, the former chief architect of Google’s Sycamore. By Dan Garisto.

But if you go back to one of the points of the quantum supremacy experiment—and something I’ve been talking about for a few years now—one of the key requirements is gate errors. I think gate errors are way more important than the number of qubits at this time. It’s nice to show that you can make a lot of qubits, but if you don’t make them well enough, it’s less clear what the advance is. In the long run, if you want to do a complex quantum computation, say with error correction, you need way below 1% gate errors.

I want to drill down on “scale versus quality,” because I think it’s sort of easy for people to understand that 127 qubits is more qubits.

It depends how you want to quantify it, but it’s not a huge factor. It could be a bit better if you had more qubits, but you would maybe have to architect it in a different way. I don’t think it is good for the field to oversell results making people think that you’re almost there. It’s progress, and that’s great, but there still is a long way to go.

Very interesting read! It seems we are still some time away from a good quantum system.

[Read More]

Creating ML model with Swift & CreateML

Categories

Tags machine-learning big-data how-to swiftlang app-development

Machine learning is hugely popular in mobile and desktop applications, so we need basic ML skills to follow this trend. By Oguz Kayra.

CreateML is a programming framework for creating ML models for apps on iOS, macOS, and iPadOS (the Apple ecosystem). CreateML makes it easier for non-data-scientists to use ML in their projects, spreading the power of ML across many apps.

The article then does a good job explaining:

  • Machine learning workflow
  • Step by step: Create ML models with CreateML
    • Determine your dataset
    • Import CreateML and foundation modules
    • Create data table with CSV table
    • Separating relevant data from DataTable
    • Dividing DataTable as test and training
    • Creating model and train the model
    • Test your regression
    • Saving machine learning model

Basically, the machine learning workflow is simple: get data, train a model with that data, test the model, update it, and use it in a real application.
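
The steps above can be sketched in CreateML roughly as follows (the file paths and column names are placeholders, not the author's actual dataset; this must run on macOS with CreateML available):

```swift
import CreateML
import Foundation

// Load the dataset from a CSV file (path is a placeholder).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "housing.csv"))

// Divide the table into training and test sets.
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 42)

// Create and train a regressor on the target column.
let model = try MLRegressor(trainingData: trainingData, targetColumn: "price")

// Test the regression on the held-out data.
let evaluation = model.evaluation(on: testingData)
print(evaluation.rootMeanSquaredError)

// Save the trained model for use in a real application.
try model.write(to: URL(fileURLWithPath: "HousePricer.mlmodel"))
```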

You can find the code and dataset in the author’s GitHub repository. Good read!

[Read More]