OpenShift: a way to containerize.


OpenShift is a family of containerization software products developed by Red Hat.

Its flagship product is the OpenShift Container Platform — an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.

The family’s other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

The OpenShift Console has developer- and administrator-oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, and so on. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.


OpenShift Architecture

OpenShift is a layered system in which each layer is tightly bound to the others through the Kubernetes and Docker clusters.

The architecture of OpenShift is designed to support and manage Docker containers, which are hosted on top of these layers using Kubernetes.

Unlike OpenShift v2, OpenShift v3 is built on containerized infrastructure: Docker handles the creation of lightweight Linux-based containers, and Kubernetes orchestrates and manages those containers across multiple hosts. The main components are:

✅Infrastructure layer

✅Service layer

✅Main node

✅Worker nodes


✅Persistent storage

✅Routing layer

🟥Infrastructure layer

In the infrastructure layer, we can host applications on physical servers, virtual servers, or even on the cloud (private/public).

🟥Service Layer

The service layer is responsible for defining pods and access policy. The service layer provides a permanent IP address and host name to the pods; connects applications together; and allows simple internal load balancing, distributing tasks across application components.
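As a sketch of what the service layer defines, a minimal Kubernetes Service manifest might look like the following (the service name, labels, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # hypothetical service name
spec:
  selector:
    app: frontend           # pods labeled app=frontend receive the traffic
  ports:
    - port: 80              # stable, cluster-internal port
      targetPort: 8080      # port the containers actually listen on
```

The Service receives a stable cluster IP and DNS name, and traffic sent to it is load-balanced across all pods matching the selector.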

🟥Main node

The main node is responsible for managing the cluster and taking care of the worker nodes. It handles four main tasks:

  • API and authentication: Any administration request goes through the API; these requests are SSL-encrypted and authenticated to ensure the security of the cluster.
  • Data Store: Stores the state and information related to environment and application.
  • Scheduler: Determines pod placements while considering current memory, CPU, and other environment utilization.
  • Health/scaling: Monitors the health of pods and scales them based on CPU utilization. If a pod fails, the main node restarts it automatically. If it fails too often, it is marked as a bad pod and is temporarily not restarted.

🟥Worker Nodes

Worker nodes host pods. A pod is the smallest unit that can be defined, deployed, and managed, and it can contain one or more containers. These containers include your applications and their dependencies. For example, Tom stores the code for his e-commerce platform in separate containers for the database, front end, user system, search engine, and so on.
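One of the containers from the example above could be declared, in simplified form, as a pod manifest like this (the pod name and image reference are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: search-engine
  labels:
    app: search-engine      # label a Service selector can match on
spec:
  containers:
    - name: search
      image: registry.example.com/shop/search:1.0   # hypothetical image
      ports:
        - containerPort: 8080   # port the application listens on
```

In practice pods are rarely created directly; they are managed by higher-level objects such as Deployments, which the scheduler places onto worker nodes.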


The registry saves your images locally in the cluster. When a new image is pushed to the registry, it notifies OpenShift and passes along the image information.

🟥Persistent Storage

This is where all of your data is saved and connected to containers. It is important to have persistent storage because containers are ephemeral, which means when they are restarted or deleted, any saved data is lost. Therefore, persistent storage prevents any loss of data and allows the use of stateful applications.
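A minimal sketch of how an application requests persistent storage is a PersistentVolumeClaim; the cluster binds it to a matching volume, which can then be mounted into a container (the claim name and size below are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data             # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce         # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi         # amount of storage requested
```

Data written to a volume backed by this claim survives container restarts and rescheduling, which is what makes stateful applications like databases possible.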

🟥Routing Layer

The last component is the routing layer. It provides external access to the applications in the cluster from any device. It also provides load balancing and auto-routing around unhealthy pods.
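In OpenShift, external access is typically expressed as a Route object that exposes a Service at a public hostname. A minimal sketch, assuming a Service named `frontend` and a hypothetical hostname:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  host: shop.example.com    # hypothetical external hostname
  to:
    kind: Service
    name: frontend          # Service that receives the traffic
  tls:
    termination: edge       # terminate TLS at the router
```

The router load-balances incoming requests across the Service's healthy pods and stops sending traffic to pods that fail their health checks.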

OpenShift vs. Kubernetes

🔴Red Hat OpenShift is an enterprise open source container orchestration platform. It’s a software product that includes components of the Kubernetes container management project, but adds productivity and security features that are important to large-scale companies.

🔴Kubernetes is an open source container orchestration project. It helps users manage clustered groups of hosts running Linux containers, which are sets of processes that contain everything needed to run in isolation.


Kubernetes offers more flexibility as an open-source framework and can be installed on almost any platform — like Microsoft Azure and AWS — as well as any Linux distribution, including Ubuntu and Debian. OpenShift, on the other hand, requires Red Hat’s proprietary Red Hat Enterprise Linux Atomic Host (RHELAH), Fedora, or CentOS. This narrows options for many businesses, especially if they’re not already using these platforms.


OpenShift has stricter security policies. For instance, running a container as root is forbidden by default. It also offers a secure-by-default posture to enhance security. Kubernetes doesn’t ship with an integrated identity provider, so developers must set up bearer tokens and other authentication mechanisms themselves.
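The "no root containers" policy that OpenShift enforces by default (through its Security Context Constraints) can also be expressed explicitly in plain Kubernetes as a pod-level security context; a minimal sketch with a hypothetical image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true      # refuse to start containers that would run as UID 0
  containers:
    - name: app
      image: registry.example.com/shop/app:1.0   # hypothetical image
```

On OpenShift this behavior comes out of the box; on upstream Kubernetes it has to be configured per workload or enforced with admission policies.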


Kubernetes has a large active community of developers who continuously collaborate on refining the platform. It also offers support for multiple frameworks and languages. OpenShift has a much smaller support community that is limited primarily to Red Hat developers.


Kubernetes lacks a built-in networking solution but lets users employ third-party network plug-ins. OpenShift, on the other hand, ships with an out-of-the-box software-defined networking solution built on Open vSwitch, which comes with three native plug-ins.


Kubernetes offers Helm templates that are easy to use and provide a generous amount of flexibility. OpenShift templates are nowhere near as flexible or user-friendly.
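For illustration, an OpenShift template is a YAML object that bundles parameters with the objects to create; a minimal sketch (the template name, parameter, and Service are hypothetical):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: frontend-template   # hypothetical template name
parameters:
  - name: APP_NAME          # substituted into ${APP_NAME} below
    value: frontend
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${APP_NAME}
    spec:
      selector:
        app: ${APP_NAME}
      ports:
        - port: 80
```

Processing the template substitutes the parameter values and creates the listed objects, whereas Helm charts add features such as templating logic, dependency management, and release versioning.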

Container Image Management

OpenShift lets developers use Image Streams to manage container images, while Kubernetes doesn’t offer container image management features.
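An image stream is itself a YAML object that tracks tags pointing at concrete images; a minimal sketch, with a hypothetical registry and image name:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: frontend            # hypothetical image stream name
spec:
  tags:
    - name: latest          # tag tracked by the stream
      from:
        kind: DockerImage
        name: registry.example.com/shop/frontend:latest   # hypothetical image
```

Builds and deployments can then reference the stream tag instead of a raw registry URL, and OpenShift can trigger redeployments automatically when the underlying image changes.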

Customer Stories :

Ford Motor Company is a global company based in Dearborn, Michigan. The company designs, manufactures, markets and services a full line of Ford cars, trucks, SUVs, electrified vehicles and Lincoln luxury vehicles, provides financial services through Ford Motor Credit Company and is pursuing leadership positions in electrification; mobility solutions, including self-driving services; and connected services. Ford employs approximately 190,000 people worldwide.

Ford sought to use Kubernetes container technology, application programming interfaces (APIs), and automation within its data centers to give its legacy stateful applications the benefits of public cloud: faster delivery, easier maintenance, and automated scalability. Consolidating its hardware and software environments with container orchestration would also help the company use its resources more effectively.

“Containers are an extremely portable way to deliver an application, because you can build in all the dependencies and libraries that allow anyone to run that container and get the same performance in any environment,” said Presnell. “But we wanted to focus on the value we could deliver, not maintaining the container platform. We needed container orchestration that would provide not only application delivery, but also service capabilities to maintain that environment.”

After running tests and proofs of concept (POCs) of container technology, Ford began looking for an enterprise partner offering commercially supported open source solutions to help run containers in production and support innovative experimentation.

“We have several open source technologies in our IT environment and products. We want to move toward being able to use and contribute to open source more — to help somebody else in the community take what we’ve done and improve on it,” said Presnell. “But we needed a container platform that had an enterprise offering, one that was well-known in the industry and was well-engineered.”

Past experience with Kubernetes led Ford to adopt CoreOS Tectonic. When CoreOS was acquired by Red Hat, Ford migrated to Red Hat OpenShift Container Platform, a solution that enhanced the strengths of CoreOS’s offering with new automation and security capabilities. Based on Red Hat Enterprise Linux®, OpenShift Container Platform offers a scalable, centralized Kubernetes application platform to help teams quickly and more reliably develop, deploy, and manage container applications across cloud infrastructure.

The company also implemented Red Hat Quay to create a centralized container registry to host and secure all of its container images while offering protected, API-based access to partners and other third parties.

“Red Hat is one of the top engineering-focused Linux companies in the world and produces one of the most significant Linux distributions,” said Presnell. “They are the second biggest contributor to the Kubernetes community. Red Hat is really focused on providing enterprise-quality service alongside engineering excellence.”

Ford has also adopted several open source technologies that Red Hat contributes to, from Open Data Hub — a data and artificial intelligence (AI) platform for hybrid cloud — to Dex, an OpenID-based identity authentication service.

During migration, Ford worked closely with Red Hat Consulting to create an environment that supports more than 100 back-end and dealer-facing stateful applications, including databases and messaging systems, inventory systems, and API managers. After launching OpenShift in production, Ford also adopted Sysdig Secure and Sysdig Monitor, a Kubernetes security solution certified by Red Hat, to add extra visibility and protection for its development and production OpenShift environments.

For its success using OpenShift for modern automotive development and using digital technology to serve customers, Ford was recognized with a 2020 Red Hat Innovation Award.

Shifting to a container-based approach requires less initial hardware investment — and ongoing savings as Ford continues to modernize and migrate its legacy applications. The company has improved the efficiency of its hardware footprint by running OpenShift on bare metal and using its existing hardware more effectively.

Ford is already experiencing significant growth in demand for its OpenShift-based applications and services. It aims to migrate most of its on-premises, legacy deployments within the next few years.

The company is also looking for ways to use its container platform environment to address opportunities like big data, mobility, machine learning, and AI to continue delivering high-quality, timely services to its customers worldwide.

“Kubernetes and OpenShift have really forced us to think differently about our problems, because we can’t solve new business challenges with traditional approaches. Innovation and constantly exploring and questioning are the only way we can move forward,” said Puranam. “It’s a journey, but one that we have a good start on. Thanks to having the right set of partners, with both Red Hat and Sysdig, we’re well-situated for future success.”

Happy learning! 🤗

Keep sharing!
