Welcome to a curated list of handpicked free online resources on IT, cloud, big data, programming languages, and DevOps. Fresh news and a community-maintained list of links, updated daily. Like what you see? [ Join our newsletter ]

Trapped on technology's trailing edge


Tags miscellaneous servers cio management performance

We’re paying too much to deal with obsolete electronic parts. Keeping aging systems on their feet is a daunting and resource-intensive task. By Peter Sandborn.

The electronics, in essence, were fine—they just couldn’t easily be fixed if even the slightest thing went wrong. Although mundane in its simplicity, the inevitable depletion of crucial components as systems age has sweeping, potentially life-threatening consequences. At the very least, the quest for an obsolete part can escalate into an unexpected, budget-busting expense.

Call it the dark side of Moore’s Law: poor planning causes companies to spend progressively more to deal with aging systems. The crux is that semiconductor manufacturers mainly answer the needs of the consumer electronics industry, whose products are rarely supported for more than four years.

The defining characteristic of an obsolete system is that its design must be changed or updated merely to keep the system in use. Qinetiq Technology Extension Corp., in Norco, Calif., a company that provides obsolescence-related resources, estimates that approximately 3 percent of the global pool of electronic components becomes obsolete each month.

The systems hit hardest by obsolescence are the ones that must perform nearly flawlessly. Technologies for mass transit, medicine, the military, air-traffic control, and power-grid management, to name a few, require long design and testing cycles, so they cannot go into operation soon after they are conceived.

The absence of crucial parts now fuels a multibillion-dollar industry of obsolescence forecasting, reverse-engineering outfits, foundries, and unfortunately, a thriving market of counterfeits. Without advance planning, only the most expensive or risky options for dealing with obsolescence tend to remain open.

This is a long but excellent read with many helpful examples and insights!

[Read More]

PyTorch – How to apply Backpropagation with Vectors and Tensors


Tags python data-science big-data learning iot miscellaneous performance

In machine learning, the backpropagation algorithm is used to compute the gradients of the loss for a particular model. The most common starting point is to use the techniques of single-variable calculus to understand how backpropagation works. However, the real challenge comes when the inputs are not scalars but matrices or tensors. By Strahinja Stefanovic.

In this post, we will learn how to deal with inputs like vectors, matrices, and tensors of higher ranks. We will understand how backpropagation with vectors and tensors is performed in computational graphs using single-variable as well as multi-variable derivatives. Further in this tutorial you will find:

  • Vector Derivatives
  • Backpropagation with Vectors
  • Backpropagation with Tensors
  • Backpropagation with Vectors and Tensors in Python using PyTorch
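As a taste of what the tutorial covers, here is a minimal sketch (my own, not the article's code; the values are made up) of backpropagation with vector inputs using PyTorch's autograd:

```python
import torch

# Vector input with gradient tracking enabled
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Reduce to a scalar loss so backward() needs no extra argument
loss = (x * 2).sum()
loss.backward()          # backpropagate through the computational graph
print(x.grad)            # d(loss)/dx = tensor([2., 2., 2.])

# For a non-scalar output, pass a vector to backward() to get a
# vector-Jacobian product instead of the full Jacobian
v = torch.tensor([1.0, 0.5, 0.25])
x2 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
z = x2 ** 2
z.backward(v)            # x2.grad = v * dz/dx2 = v * (2 * x2)
print(x2.grad)           # tensor([2.0000, 2.0000, 1.5000])
```

The second half is exactly the "backpropagation with vectors" case the article digs into: when the output is not a scalar, you supply a vector to `backward()` and get a vector-Jacobian product back.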

This is a very detailed article with charts, algorithms, and PyTorch code examples explaining important concepts. Ideal for any data scientist!

[Read More]

What is fog computing and how does it work?


Tags data-science big-data robotics iot

The concept of fog computing was developed to combat the latency issues that affect a centralized cloud computing system. The boom of consumer and commercial IoT devices and technologies has put a strain on cloud computing resources. By Jose Gomez.

Fog computing can optimize data analytics by storing information closer to the data source for real-time analysis. Data can still be sent to the cloud for long-term storage and analysis that doesn’t require immediate action. Let’s get a better understanding of the underlying principles behind fog computing and see the ways it can benefit large, dispersed networks.

The article's main content:

  • How does it work?
  • The benefits of fog computing
  • The disadvantages of fog computing
  • What industries rely on fog computing?

With the sheer amount of data being collected by IoT devices, many organizations can no longer afford to ignore the capabilities of fog computing, but it is not wise to turn your back on cloud computing either.

In reality, any device with computing, storage, and network connectivity can act as a fog node. When data is collected by IoT devices and edge computing resources, it is sent to the local fog node instead of the cloud. Utilizing fog nodes closer to the data source has the advantage of faster data processing when compared to sending requests back to the data center for analysis and action. In a large, distributed network, fog nodes would be placed in several key areas so that crucial information can be accessed and analyzed locally. Good read!
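To make that routing idea concrete, here is a tiny illustrative sketch (not from the article; the names and the routing rule are hypothetical) of how a gateway might split traffic between a local fog node and the cloud:

```python
# Hypothetical sketch: send latency-sensitive IoT readings to the
# nearest fog node for real-time analysis, and everything else to
# the cloud for long-term storage and batch analysis.

def route_reading(reading: dict) -> str:
    """Return the destination tier for a single sensor reading."""
    if reading.get("latency_sensitive"):
        return "fog-node"      # processed locally, close to the source
    return "cloud"             # archived centrally for later analysis

alarm = {"sensor": "smoke-07", "value": 0.93, "latency_sensitive": True}
trend = {"sensor": "temp-01", "value": 21.4, "latency_sensitive": False}

print(route_reading(alarm))    # fog-node
print(route_reading(trend))    # cloud
```

In a real deployment this decision lives in the fog node or gateway itself, but the split is the same: time-critical data stays local, bulk data goes upstream.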

[Read More]

How can we fix the data science talent shortage?


Tags data-science career miscellaneous cio big-data

Data science might just be the most buzzed-about job in tech right now, but its pop culture sheen conceals some of the harsh realities of being a fresh graduate in the industry. By Kindra Cooper.

The job topped LinkedIn’s yearly Emerging Jobs Report from 2016 to 2019 consecutively (it is now at #3). But when Springboard data science alum Kristen Colley started hunting for her first data science job in 2019, most companies were not interested in her data science credentials. “When I started rebranding myself as a data analyst with the ability to handle machine learning problems, that’s when the opportunities started coming in,” she said.

The article then covers:

  • A high barrier to entry
  • Why are data science roles in such high demand?
  • A talent shortage and a tight labor market
  • How is the industry responding?

“I think that’s where the industry’s headed: it’s not about having a million proficient data scientists that can come up with the entire ETA from model creation to implementation,” said Colley. “It’s more about having software engineers that understand enough to implement these autoML techniques.”

The talent shortage in data science isn’t a simple matter of not enough people training to become data scientists. In fact, there’s an “experience” gap that tends to be built into highly practical professions with a steep learning curve like software engineering, where education is a weak substitute for real-world experience. Very good!

[Read More]

Leverage enterprise-scale reference implementations for your cloud adoption


Tags software-architecture azure cio app-development web-development

This blog will discuss the IT team at Tailwind Traders and how they leveraged enterprise-scale reference implementations for the cloud environment they are building. By Thomas Maurer, Senior Cloud Advocate, Azure, and Sarah Lean, Senior Content Engineer, Azure.

Enterprise-scale landing zone architecture provides a strategic design path and target technical state for your Azure environment, including enterprise enrollment, identity, network topology, resource organization, governance, operations, business continuity, and disaster recovery (BCDR), as well as deployment options. These landing zones follow design principles across the critical design areas for an organization’s Azure environment and align with Azure platform roadmaps to ensure that new capabilities can be integrated.

The article then walks through the architecture, considering several design areas:

  • Enterprise agreement (EA) enrollment and Azure Active Directory tenants
  • Identity and access management
  • Management group and subscription organization
  • Network topology and connectivity
  • Management and monitoring
  • Business continuity and disaster recovery
  • Security, governance, and compliance
  • Platform automation and DevOps

You will also get further resources for study, as well as charts explaining various architectures, including the enterprise-scale foundation reference architecture, the enterprise-scale hub-and-spoke reference architecture, and the enterprise-scale virtual wide-area network (WAN) reference implementation. Well done!

[Read More]

Worst nightmare cyberattack: The untold story of the SolarWinds hack


Tags infosec cio management software crypto servers

The routine software update may be one of the most familiar and least understood parts of our digital lives. By Dina Temple-Raston.

The routine update, it turns out, is no longer so routine. Hackers believed to be directed by the Russian intelligence service, the SVR, used that routine software update to slip malicious code into Orion’s software and then used it as a vehicle for a massive cyberattack against America.

By design, the hack appeared to work only under very specific circumstances. Its victims had to download the tainted update and then actually deploy it. That was the first condition. The second was that their compromised networks needed to be connected to the Internet, so the hackers could communicate with their servers.

“The tradecraft was phenomenal.”

SolarWinds CEO and President figures the Russians successfully compromised about 100 companies and about a dozen government agencies. The companies included Microsoft, Intel and Cisco; the list of federal agencies so far includes the Treasury, Justice and Energy departments and the Pentagon.

“It’s really your worst nightmare,” Tim Brown, vice president of security at SolarWinds, said recently. “You feel a kind of horror. This had the potential to affect thousands of customers; this had the potential to do a great deal of harm.”

This is a super interesting read for anybody in information security. Great read!

[Read More]

Use event-driven data mesh to avoid drowning in the (data) lake


Tags machine-learning big-data cloud cio miscellaneous management

For much of the last decade, enterprises fought against data silos, isolated persistence stores holding untold but inaccessible knowledge. Their primary weapon was the data lake: a huge centralized datastore that held terabytes of domain-specific data in a single logical location. By Jesse Menning.

Gartner’s conception of a data fabric relies heavily on the “backbone” of a knowledge graph. The knowledge graph describes the relationship between data sources throughout the entire fabric. Using this graph, machine learning and artificial intelligence determine the relationships between various sources of data and infer metadata automatically. The result is a catalog of data resources that can be used by consumers across the enterprise.

It turns out data lakes bring challenges of their own. The article then explains:

  • Data mesh vs. data fabric
  • What is a data fabric
  • What is a data mesh
  • Flavors of data products
  • How to get started with an event-driven data mesh

Event-driven architecture at the transactional layer accelerates customer interaction, giving businesses a leg up on their competition. One layer down, an event-driven data mesh can do the same for analytics, decreasing the time it takes to get answers to crucial questions using data from across domains. The first steps down the path are to choose an approach (data fabric vs. data mesh) and pick an event-driven infrastructure that can support the initiative. Good read!
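The event-driven idea can be sketched in a few lines (my own toy example; the topic name and event shape are invented): each domain publishes its data products as events, and analytics consumers subscribe instead of pulling everything from one central lake.

```python
# Toy in-process event broker illustrating the publish/subscribe
# pattern behind an event-driven data mesh. In practice this role
# is played by an event streaming platform, not an in-memory dict.
from collections import defaultdict
from typing import Callable

class EventBroker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
received = []

# The analytics domain subscribes to the orders domain's data product
broker.subscribe("orders.v1", received.append)

# The orders domain publishes an event when new data is available
broker.publish("orders.v1", {"order_id": 42, "total": 19.99})

print(received)  # [{'order_id': 42, 'total': 19.99}]
```

The point of the pattern is that the owning domain pushes data out as it changes, so consumers across domains get fresh answers without waiting on a centralized lake.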

[Read More]

Moving fast and breaking us all: Big tech's unaccountable algorithms


Tags machine-learning big-data cloud cio learning

They decide who passes and who fails in secondary school. They decide who gets arrested and who goes to prison. They decide what news you see first thing in the morning as well as what news you won’t see. And they drive the business models—and revenues—of the world’s largest and most powerful digital platforms. By Ellery Roberts Biddle & Jie Zhang.

…our findings suggest that much of the technology driving revenue for the world’s most powerful digital platforms is accountable to no one—not even the companies themselves.

The article content is divided into:

  • Why set human rights standards for algorithms?
  • Does the company disclose a commitment to human rights in its development and use of algorithmic systems?
  • How do they feed their algorithms?
  • How do the algorithms work?
  • Do companies measure the risks that their systems pose for human rights?
  • Ethics commitments are not going to solve these problems

None of the platforms published policies demonstrating an effort to integrate respect for human rights into their deployment of algorithms for their products and services, where it actually counts for users. Interesting read!

[Read More]

Cloud vendor lock-in: the good, the bad and reality


Tags software-architecture cloud cio learning miscellaneous

This is the second part of a mini-series centered around cloud computing; a high-level overview of vendor lock-in and mitigation strategies. By Piotr.

Vendor lock-in happens when a customer is dependent on a vendor’s products or services and is unable to switch to another vendor without incurring substantial costs and/or organizational changes. This generic definition applies also to cloud vendor lock-in where cloud vendor is any public cloud provider like Azure, AWS, GCP, Hetzner, Linode, etc.

The article content covers:

  • What is vendor lock-in?
  • The Good - why single vendor strategy is attractive
  • The Bad - putting all your eggs in one basket

Here are the two most common pitfalls in avoiding cloud vendor lock-in:

  • Using the lowest common denominator
  • Building your own integration layer
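The second pitfall is easy to picture. Below is a hypothetical sketch of a home-grown integration layer (class and method names are invented for illustration): application code depends only on the interface, so swapping providers is cheap in theory, but every provider feature you want must first be re-exposed through this layer, which is where the maintenance cost comes from.

```python
# Hypothetical home-grown integration layer: one interface in front
# of provider-specific storage backends.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in for a real backend such as AWS S3 or Azure Blob Storage."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

# Call sites depend only on the BlobStore interface, never on a
# provider SDK -- a backend swap needs no call-site changes, but any
# advanced provider feature is unavailable until this layer wraps it.
store: BlobStore = InMemoryStore()
store.put("report.csv", b"a,b\n1,2\n")
print(store.get("report.csv"))
```

This is exactly the trade-off the article describes: the abstraction buys portability at the price of forgoing (or re-implementing) the advanced services that made the provider attractive in the first place.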

One of the most important benefits of using a public cloud provider is its advanced services, which can significantly improve developer productivity and lower the complexity of IT governance.

In reality, things are very dynamic, and every organization must be ready to react to the ever-changing environment and requirements of the market it operates in. Good read!

[Read More]

How quantum computing will transform these 9 industries


Tags software-architecture cloud data-science machine-learning big-data

Quantum computing remains a nascent technology, but its potential is already being felt across many sectors. From healthcare to finance to artificial intelligence, we look at the industries poised to be reshaped by quantum computers. By cbinsights.com.

One area the company is looking at is quantum annealing for digital modeling and materials sciences. For instance, a decent quantum computer could quickly filter through countless variables to help determine the most efficient wing design for an airplane. Other companies, including Daimler and Samsung, are already using quantum computers to help research new materials for building better batteries.

In this article, the authors look at 9 spaces where quantum computing is already making waves:

  • Healthcare
  • Finance
  • Cybersecurity
  • Blockchain and cryptocurrencies
  • Artificial intelligence
  • Logistics
  • Manufacturing and industrial design
  • Agriculture
  • National security

Governments around the world are investing heavily in quantum computing research initiatives, partly in an attempt to bolster national security. Last year, the US government announced an almost $625M investment in quantum technology research institutes run by the Department of Energy — companies including Microsoft, IBM, and Lockheed Martin also contributed a combined $340M to the initiative. Similarly, China’s government has poured billions of dollars into numerous quantum technology projects and a team based in the country recently claimed to have achieved a quantum computing breakthrough. Good read!

[Read More]