Anuta ATOM

ATOM Deployment Guide


Purpose of this document

This document describes how to deploy the ATOM software in a Kubernetes environment.

Intended Audience

This installation procedure is intended for administration teams responsible for ATOM software deployment and operations.

ATOM deployment and operations require hands-on experience installing Kubernetes clusters and deploying with Helm charts. This document assumes familiarity with Docker, containers, hypervisors, and networking, as well as a good working knowledge of operating systems.

Overview of ATOM Architecture

ATOM software is dockerized and runs on a Kubernetes cluster. In the current release, ATOM is provided as a self-contained installation that includes all the required components, as illustrated below:

ATOM Deployment

ATOM deployment requires its software components to be deployed in a Kubernetes environment. The software is distributed through a central repository.

Deployment scenarios

ATOM can be deployed in any of the following environments:

  • On-Prem Kubernetes

  • Google Cloud Platform (GCP)

  • Amazon Web Services (AWS)

ATOM can be deployed with all the components at a single location or with some of the components distributed:

  • Local Deployment

  • Distributed Deployment

Local Deployment

Local deployment has all the ATOM software components deployed in a single Kubernetes cluster.

Distributed Deployment

Distributed deployment allows ATOM software components to be distributed across multiple Kubernetes clusters, either in the same location or in different geographical locations. Distributed deployment is applicable in the following scenarios:

  • To deploy a Remote Agent – in some customer scenarios, network equipment is distributed across different locations. An ATOM Agent can be deployed close to the network equipment for security or performance reasons.

  • Geo-redundant HA – ATOM components can be deployed across multiple locations/sites within the same region to provide fault tolerance against an entire site/location going down. More details in ATOM Multi Availability Zone based HA.

Target Infrastructure

ATOM can be deployed on premises, in the cloud, or in a combination of both, as summarized in the table below.

  • Cloud (Amazon, GCP or similar) – typically for Staging and Production deployments. Use cases: Development, Stage, Production. Prerequisites: see Hardware Requirements.

  • On Premises – typically for Staging and Production deployments; can be used for multi-user shared Development as well. Use cases: Development, Stage, Production. Prerequisites: see Hardware Requirements.

  • Cloud + On Premises – the ATOM Agent can be deployed on premises while the rest of ATOM is deployed in the Cloud. Use cases: Development, Stage, Production. Prerequisites: see Hardware Requirements.

On-Prem (VMware ESXi, KVM)

For a Kubernetes cluster deployed on ESXi, KVM, etc., make sure the required compute, storage and memory resources for the VM nodes are allocated so that ATOM can run on top of the K8s cluster. Anuta can provide the OVA images for creating the K8s master and worker nodes on ESXi; the OVAs can be converted to Qcow2 images to deploy the K8s master and worker nodes on KVM.

Cloud (GCP / AWS)

Because cloud deployments on GCP/AWS offer different node types, make sure the node type you select matches the resources required for a worker node in the Compute, Storage & Memory requirements (a separate master node is not required in GCP/AWS). For GCP deployment, an e2-highmem-4 or custom-4-32768-ext node type is required; for AWS deployment, an r5.xlarge node type.
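For reference, a worker pool with a matching machine type can be provisioned with the standard cloud CLIs. This is a minimal sketch assuming the gcloud and eksctl tools and managed clusters; cluster and pool names are placeholders:

# GKE: create a 3-node worker pool of e2-highmem-4 machines
gcloud container node-pools create atom-workers --cluster=<cluster> --machine-type=e2-highmem-4 --num-nodes=3

# EKS: create a 3-node group of r5.xlarge instances
eksctl create nodegroup --cluster=<cluster> --name=atom-workers --node-type=r5.xlarge --nodes=3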

Requirements

Before deploying ATOM in the Kubernetes cluster, ensure that the following requirements are satisfied:

Compute, Storage & Memory

Note: SSD storage is mandatory, as ATOM's databases and messaging services perform significantly better on SSDs.

Basic Non-HA Setup

A basic Non-HA setup that does not support resiliency requires a Kubernetes cluster (1 master and 3 worker nodes) running on ESXi with the recommendations listed below:

K8s Master – 1 node

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

K8s Workers – 3 nodes (per node)

  • Storage reserved in ESXi: 300 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 32 GB

Basic Resilient HA Setup

A basic HA setup supporting resiliency against single node or pod failures requires a Kubernetes cluster (3 masters and 7 worker nodes) running on ESXi with the following details:

K8s Masters – 3 nodes (per node)

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

K8s Workers – 7 nodes (per node)

  • Storage reserved in ESXi: 300 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 32 GB

It is recommended to use RAID10-based storage and to provision the VMs across multiple physical servers.

Multi-site Deployment (Remote ATOM agent)

For a multi-site distributed deployment, where the ATOM agent is deployed remotely, a single ATOM agent (minimum) is deployed at each site in addition to the setup choices above. A virtual machine with the following specification is required at each site location:

1 Virtual Machine

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

ATOM Multi Availability Zone HA Deployment

ATOM supports deployment across multiple sites (also known as Availability Zones) to provide high availability in the event of a site failure, provided the sites are connected over low-latency links. This requires ATOM components to be deployed across multiple sites or Availability Zones (AZs). Availability Zones are readily available when workloads are provisioned in a cloud service provider. In this scenario, the Kubernetes cluster extends across multiple sites/zones.


Caveats:

  • Full fault tolerance against a single site failure requires ATOM deployment across 3 locations/sites.

  • If only 2 sites/locations are available:

    • Full fault tolerance against a single site failure is supported only from Release 8.8.

  • Multi Availability Zones across regions is yet to be certified in ATOM.

  • Some ATOM components that support deployment across multiple Availability Zones or sites are sensitive to latency; in such scenarios there will be an impact on application performance or throughput.

3 Sites Deployment:

For Each Site:

K8s Master – 1 node

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

K8s Workers – 3 nodes (per node)

  • Storage reserved in ESXi: 300 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 32 GB

2 Sites Deployment:

Site-1:

K8s Masters – 2 nodes (per node)

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

K8s Workers – 4 nodes (per node)

  • Storage reserved in ESXi: 300 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 32 GB

Site-2:

K8s Masters – 2 nodes (per node)

  • Storage reserved in ESXi: 40 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 8 GB

K8s Workers – 4 nodes (per node)

  • Storage reserved in ESXi: 300 GB (SSD)

  • CPU: 4 vCPU

  • Memory: 32 GB

AWS Availability Zones

Refer to the section Deploying New K8s Cluster for ATOM deployment in AWS, which uses Availability Zones (AZs) during deployment.

On Premises Across Data Centers / Locations

For an on-premises deployment of the Multi Availability Zone model across different sites, the latency requirements have to be met.

Refer to the section Deploying New Kubernetes Cluster for on-premises ATOM deployment, which creates a K8s cluster among master and worker nodes across ESXi hosts/locations/data centers that have reachability to each other.

Network Requirements

ATOM related Ports/Protocols

The components of the ATOM application communicate with each other and with external systems using the following ports and protocols.

Wherever applicable, firewall rules need to be updated to allow communication from external clients to ATOM, from ATOM software to the network infrastructure, and between ATOM software components.

Northbound communication [external clients: access to the ATOM Portal and other ATOM management clients]

  • ATOM Server (end-user application) – port 30443, HTTPS. The ATOM GUI page, served via HAProxy.

  • Single Sign-On – port 32443, HTTPS. Single Sign-On login for ATOM, Grafana, Kibana, Glowroot and Kafka-Manager.

  • Alert Manager – ports 31090, 31093, 32090, HTTP. Monitoring/debugging alerts.

  • Minio – port 31311, HTTP. ATOM FileServer/Minio access.

Inter-component communication [applicable when ATOM agent and server components are deployed separately, possibly with a firewall in between]

  • ATOM Server – ATOM Agent: port 7000 / NodePort (30700), TCP/RSocket. The remote Agent communicates with the Agent-proxy via agent-lb.

Southbound communication with network devices [ATOM Agent – network elements]. Different ports are used for various use cases in ATOM; make sure PING reachability is also available.

  • 23 – Telnet to network devices (TCP)

  • 21 – FTP to network devices (TCP)

  • 22 – SSH to network devices (TCP)

  • 161 – SNMP to network devices (UDP)

  • 162 – SNMP trap listening (server) from network devices (UDP)

  • 69 – TFTP to network devices (UDP)

  • 514 – Syslog listening (server) from network devices (UDP)

  • 830 – NETCONF to network devices (TCP)

  • 32222 – SFTP from the ATOM server to devices when ATOM needs to act as an image server (TCP)

Please ensure that public internet access is available on all the nodes. If public access cannot be provided across the nodes, consider an offline installation of the ATOM software by hosting the registry within your network. Below are the public domains which ATOM accesses for pulling docker images and other binaries.

ATOM -> Required Public Access Details for Firewall if applicable.

All of the following domains are accessed over 443/https:

  • registry-1.docker.io

  • quay.io

  • gcr.io

  • grafana.com

  • codeload.github.com

  • deb.debian.org
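Reachability to these endpoints can be spot-checked from each node before installation; for example, each of the following should print an HTTP status line rather than time out:

curl -sI https://registry-1.docker.io/v2/ | head -1
curl -sI https://quay.io/ | head -1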

Kubernetes related Ports/Protocols

Below are the ports and protocols which need to be allowed for Kubernetes cluster creation among the VM nodes. They need to be allowed on the firewall if there is one between the VMs, for example when the VMs are spread across DCs.

  • 443/TCP – Kubernetes API server (HA mode)

  • 6443/TCP – Kubernetes API server

  • 2379-2380/TCP – etcd server client API

  • 10250/TCP – Kubelet API

  • 10251/TCP – kube-scheduler

  • 10252/TCP – kube-controller-manager

  • 10255/TCP – Kubelet

  • 179/TCP – Calico CNI

  • 9100/TCP – Prometheus

  • 30000-32767/TCP – NodePort services

  • 6783/TCP – Weave port (deprecated)

Linstor related Ports/Protocols

ATOM uses the Linstor CSI driver as a storage provisioner. Below are the ports and protocols which need to be allowed among the Kubernetes VM nodes for Linstor. If the Kubernetes cluster spreads across multiple DCs, these ports and protocols need to be open on the DC firewalls as well.

Protocol: TCP, Ports: 3366-3367, 3370, 3376-3377, 7000-8000
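As an illustration, on nodes running firewalld the Kubernetes and Linstor ports listed above could be opened as follows (an assumption about your firewall tooling; a perimeter firewall between DCs needs equivalent rules):

# Kubernetes control-plane and node ports
firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp --add-port=10255/tcp --add-port=179/tcp --add-port=9100/tcp --add-port=30000-32767/tcp
# Linstor ports
firewall-cmd --permanent --add-port=3366-3367/tcp --add-port=3370/tcp --add-port=3376-3377/tcp --add-port=7000-8000/tcp
firewall-cmd --reload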

IP Addressing Requirements

  • One IP for each of the VM nodes.

  • For an HA master resilient setup, when 3 masters are opted for, reserve one extra IP (a virtual IP) in the same subnet as the 3 masters.

  • Internally, the 10.200.0.0/16 subnet is used for communication between microservices. If this subnet conflicts with any of the lab device subnet IPs, mention it beforehand so it can be handled accordingly during ATOM deployment.
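A quick way to verify that 10.200.0.0/16 does not collide with existing routing on a node is to inspect the route table, for example:

ip route | grep '^10\.200\.' || echo 'no conflicting route found'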

Kubernetes Cluster Requirements

ATOM software needs to be installed on a dedicated Kubernetes cluster and can be deployed on the following Kubernetes distributions:

Any other variation of Kubernetes distribution requires additional validation from Anuta and can require significant lead time depending on the distribution.

Anuta provides deployment artifacts such as OVA images for CentOS VMs, scripts for creating the Kubernetes cluster, and the images required for deploying ATOM.

For creating a Kubernetes cluster, check if the following requirements are satisfied:

  • All the hardware requirements defined in the section Hardware Requirements are met.

  • The OVA (CentOS server with pre-installed minimal packages) has already been imported manually into the vCenter template library. The OVAs should have been obtained from Anuta.

For bootstrapping the Kubernetes cluster, the scripts obtained from Anuta, when triggered, install the packages below and perform the subsequent ATOM deployment (a quick version check is shown after this list). Refer to the section, “Procedure for Deploying ATOM”.

  • Docker-ce v20.10.5

  • Kubectl v1.19.8

  • Helm v3.3
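Once the scripts complete, the installed versions can be confirmed against the list above:

docker version --format '{{.Server.Version}}'
kubectl version --client --short
helm version --short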

Environment Requirements

The requirements of the Anuta scripts for Kubernetes cluster creation are as follows:

  • Static IP addresses with unique hostnames should be provided manually (recommended over DHCP).

  • The ESXi/vCenter version must be 6.0+, and the OVA [CentOS server with pre-installed minimal packages (shared by Anuta)] must be imported into one of the vCenter hosts.

Deployment scripts and files

To simplify the deployment of Kubernetes clusters in your environment, the required scripts and files are organized into folders and are provided by Anuta Networks (in a zipped format).

  • ATOM – ATOM's deployment files

  • node_setup.py – helper script to bootstrap the nodes and install the ATOM software

Checklist for managing Kubernetes cluster

If you already have a Kubernetes cluster created, ensure that the following are in place before deploying the ATOM application.

ATOM Software Requirements

Before proceeding with the deployment of the ATOM application, you must have the following software artifacts, obtained from Anuta Networks:

Deployment Images

All the images required for deploying the components of ATOM will be pulled from repositories created in Quay (https://quay.io/repository/).

The images are tagged with a specific name, in the format given below:

quay.io/<organization>/<image name>:<tag>

Example: quay.io/release/atom-core:8.X.X.X.YYYYY
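For example, registry access can be verified by pulling an image manually with Docker (substitute the actual release tag for the placeholder below):

docker pull quay.io/release/atom-core:8.X.X.X.YYYYY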

Deployment scripts and files

Deploying ATOM in the local setup involves deploying the components required to build the ATOM application using Helm charts. To simplify the deployment in your environment, the required scripts and files are organized into folders and are provided by Anuta (in a zipped format).

  • ATOM – ATOM's deployment files

  • scripts – scripts that check and install the kube, docker, helm and python packages

The key folder, ATOM, contains the Helm charts, templates and deployment scripts used for ATOM deployment. It contains Helm charts such as the following:

  • databases – contains the deployment files of all databases: PolicyDB and Kafka

  • atom – contains multiple charts, one per individual microservice

  • infra – contains charts related to infra components such as web-proxy, logstash, glowroot, etc.

  • external-services – optional services to access external services like databases, Kafka, etc.

  • grafana – contains the Helm charts for the Grafana monitoring tool

  • persistence – contains the YAML files for creating persistent volumes

  • tsdb-server and tsdb-monitoring – contain the Helm charts for TSDB

  • minio – contains the Helm charts for Minio/object storage

  • sso – contains the Helm charts for SSO objects

Each of the above folders contains the following:

  • README.md – detailed readme information

  • chart.yaml – contains the information about the chart

  • values.yaml – default configuration values for this chart

  • templates – a directory of templates that, when combined with the values provided at run time, generate valid Kubernetes manifest files
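For illustration, an individual chart can be rendered or installed with Helm v3 in the usual way; the chart path and release name below are assumptions, and deploy_atom.py normally drives this for you:

# Render the manifests locally to inspect what would be applied
helm template grafana ./ATOM/grafana -f ./ATOM/grafana/values.yaml

# Or install the chart into the atom namespace (the namespace must already exist)
helm install grafana ./ATOM/grafana -n atom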

Security Apps on VM nodes before ATOM install

Users can install security agents or clients on the VM nodes to meet their internal security compliance policies (for example, Trend Micro). Users have to make sure that these agents or clients do not interfere with Kubernetes processes and applications and do not modify them while ATOM is running. For information on the ports used by Kubernetes and the ATOM application, refer to the section Network Requirements.

Procedure for Deploying ATOM

ATOM applications can be deployed on a new Kubernetes cluster with the help of the deployment scripts and files provided by Anuta.

New Kubernetes cluster

As discussed in the section “Environment Requirements”, provide static IP addresses for the VMs and follow this sequence of steps to bootstrap the Kubernetes cluster.

  • Verify that you have imported the shared Anuta OVA templates into your VMware vCenter.

  • For the minimal setup, create 4 VMs (1 master and 3 worker nodes) using the CentOS OVA template provided by Anuta. The same approach applies to the Basic Resilient HA setup.

    • The specs for the master node are 4 vCPU / 8 GB RAM / 40 GB SSD / 1 NIC.

    • The specs for each worker node are 4 vCPU / 32 GB RAM / 300 GB SSD / 1 NIC.

    • The login credentials for these VMs are atom/secret@123. For any python script executions use sudo, for which the password is again secret@123.

NOTE: Do not log in to the VMs with the root username.

  • Run node_setup.py, which is present in the home directory, with sudo privileges as shown below. [Note: this script needs to be run on each node individually.]
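A typical invocation looks like the following (a sketch; the script lives in the atom user's home directory, and the exact interpreter may differ):

cd ~
sudo python node_setup.py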

  • Enter 1 (master) or 2 (worker), depending on the type of node that you want to provision.

Choose among the following:

  • Bootstrap Script: initially helps you set up basic network connectivity, hostname configuration and NTP settings.

  • Atom Installation: used to deploy K8s and bring up the ATOM software at a later stage. Complete steps 4-7 before invoking this.

  • Enter 1 to proceed with the bootstrap function, and select the complete fresh setup by again choosing 1 as shown below:

  • Provide the following inputs as requested by the script:

  • Interface Details to be provisioned along with relevant CIDR info.

  • DNS Server Information

  • NTP Server Information

  • Hostname of the VM along with the hostname-ip to bind.

Refer to the screenshots below:

Network Configuration Details

NTP Server Configuration Details

Hostname Configuration Details

Once the bootstrap is complete, proceed with the next steps. [Note: hostname changes are reflected only after a reboot. Select yes to reboot if you wish to change the hostname.]

  • Make sure Internet access is available from all the nodes.

  • After completion of the bootstrap process, we are ready to begin the ATOM installation. Select Atom Installation on the master node and proceed.

  • Since it is a fresh install, enter choice 1 and begin the process. If a K8s cluster is already set up in the customer lab, proceed with the ATOM software download and installation by selecting the appropriate choices.

  • To download the ATOM software provided by the Anuta Networks support team, use any of the methods below:

  • Wget: a utility for non-interactive download of files from the web. The user enters the link as input, and the files are downloaded and extracted automatically.

  • Manual: the user can use any standard file transfer protocol to transfer the files to the home directory of the atom user.

  • After the installation files are copied to the master node, we can begin the K8s deployment. Depending on whether you want a minimal or resilient setup, provide the inputs as shown below:

For Basic Non HA setup follow below

For Resilient HA setup follow below

  • The above script can create a K8s cluster among master and worker nodes spread across ESXi hosts/locations that have reachability to each other.

  • Verify the node creation using the command “kubectl get nodes” and verify the labels using the command “kubectl get nodes --show-labels”.

  • As the Kubernetes cluster’s bootstrapping is already done, it is ready for ATOM deployment. Follow the steps outlined in the section, “ATOM Deployment” to complete the ATOM deployment process.

ATOM Deployment

After ensuring that the prerequisites described in the section “Prerequisites for Deploying ATOM” are taken care of, perform the following steps:

  • For a Non-HA or Resilient-HA setup, ensure that the worker nodes are labelled properly as below.

For Non-HA setup:

Worker 1: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb

Worker 2: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb

Worker 3: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb

For Resilient-HA setup:

Worker 1: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-1

Worker 2: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-1

Worker 3: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-1

Worker 4: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-2

Worker 5: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-2

Worker 6: elasticsearch,broker,zookeeper,object_store,default_agent,grafana,distributed_db,agent1,securestore,northbound,thanos,monitoring_server,infra-tsdb,topology.kubernetes.io/zone=dc-2

You can view the existing labels using “kubectl get nodes --show-labels”. To label a node with a specific label, use the command below:

kubectl label node <node-name> <label_name>=deploy

Note: Make sure you label dc-1 and dc-2 appropriately based on the datacenter where each node is present. For additional worker nodes, the labelling approach remains the same as above. An example labelling command is shown below.
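For example, to apply the labels to a worker (the node name worker-1 is hypothetical), each label key from the lists above is set to deploy, and in the Resilient-HA case the zone label carries the datacenter value:

kubectl label node worker-1 elasticsearch=deploy broker=deploy zookeeper=deploy object_store=deploy default_agent=deploy grafana=deploy distributed_db=deploy agent1=deploy securestore=deploy northbound=deploy thanos=deploy monitoring_server=deploy infra-tsdb=deploy
kubectl label node worker-1 topology.kubernetes.io/zone=dc-1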

  • To download the ATOM software provided by the Anuta Networks support team, use any of the methods described in the section New Kubernetes cluster, step 10.

  • On the master node of the Kubernetes cluster, deploy the components of the application by selecting the Atom installation option.

The Python script “deploy_atom.py” will execute and use the values.yaml file, which has values set for various parameters and the images to be used for the deployment.

NOTE: The order in which these components should be deployed is already defined in the above script.

OPTIONAL: If a different namespace (instead of the atom namespace) needs to be used, make the following change in the functional_minimal.yaml file:

usernamespace:
  enabled: false
  namespace: <mynamespace>
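If a custom namespace is chosen, it can be created ahead of the deployment (assuming the deployment scripts do not create it for you; replace <mynamespace> accordingly):

kubectl create namespace <mynamespace>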

A successful ATOM deployment of the components using Helm will have sample output like below:

anuta docker registry secret was not found, creating it

helm check is successful

Folders creating done.

PV creating done.

All module check is successful

deploying Linstor charts

Helm chart haproxy got successfully deployed

Helm chart keycloak skipped

Helm chart infra-kibana got successfully deployed

Helm chart haproxy-gw got successfully deployed

Helm chart dashboard got successfully deployed

Helm chart oauth2 skipped

Helm chart lb got successfully deployed

Helm chart infra-grafana got successfully deployed

Helm chart infra-distributed-db-webconsole got successfully deployed

Helm chart infra-logstash got successfully deployed

Helm chart broker got successfully deployed

Helm chart zookeeper got successfully deployed

Helm chart infra-distributed-db-webagent got successfully deployed

Helm chart infra-log-forwarder got successfully deployed

Helm chart elasticsearch-config got successfully deployed

Helm chart schema-repo got successfully deployed

Helm chart infra-elasticsearch got successfully deployed

Helm chart infra-distributed-db got successfully deployed

returncode is 0

DB pods deployed

Helm chart infra-tsdb-monitoring got successfully deployed

Helm chart minio got successfully deployed

Helm chart thanos got successfully deployed

Helm chart atom-workflow-engine got successfully deployed

Helm chart atom-file-server got successfully deployed

Helm chart atom-inventory-mgr got successfully deployed

Helm chart atom-isim got successfully deployed

Helm chart kafka-operator got successfully deployed

Helm chart atom-pnp-server got successfully deployed

Helm chart atom-core got successfully deployed

Helm chart atom-qs got successfully deployed

Helm chart atom-agent-proxy got successfully deployed

Helm chart atom-scheduler got successfully deployed

Helm chart atom-sysmgr got successfully deployed

Helm chart atom-agent got successfully deployed

Helm chart atom-telemetry-engine got successfully deployed

Helm chart atom-frontend got successfully deployed

Helm chart infra-glowroot got successfully deployed

Helm chart burrow got successfully deployed

Helm chart es-curator got successfully deployed

Helm chart jaeger-tracing got successfully deployed

Helm chart kafka-control got successfully deployed

Helm chart kafka-manager got successfully deployed

Helm chart infra-web-proxy got successfully deployed

Helm chart infra-tsdb got successfully deployed

Helm chart modsecurity got successfully deployed
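Once the script reports all charts as deployed, overall health can be checked with standard kubectl and helm commands; a quick sanity check, assuming the default atom namespace:

kubectl get pods -n atom
helm list -n atom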

  • To get a summary of the access URLs for the various components deployed, execute the following commands in the scripts folder:

# cd scripts

# sh get_urls.sh

The output will be similar to below

atom URL : https://172.16.22.207:30443/

atom-kpi-metrics URL : http://172.16.22.207:/

Error from server (NotFound): services “infra-tsdb-query” not found

service domain-metrics is not deployed

service kafka-manager is not deployed

SSO URLS for application endpoints are:

ATOM UI ==> https://172.16.20.241:32443

KEYCLOAK UI ==> https://172.16.20.241:32443/auth

KIBANA UI ==> https://172.16.20.241:32443/kibana

GRAFANA UI ==> https://172.16.20.241:32443/grafana

GLOWROOT UI ==> https://172.16.20.241:32443/glowroot

K8S UI ==> https://172.16.20.241:32443/k8s/

KAFKA MANAGER UI ==> invalid or not deployed

Fetching token for Kubernetes Dashboard login

eyJhbGciOiJSUzI1NiIsImtpZCI6IllkSlJIVHVfaHZsc3NndUM4NDRFMmdqU1FLT1VWekhOcXZOUGFNNDExZFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhLXRva2VuLXRjdGY3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiY2EzNTM0ZDMtYmE2YS00ZDAxLWE2YTItYmRiZmUyZTYwMDk0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.yWSYaxanwGtd3HdQTVRzv8KBfi_EPxqs0yruKEM5rxaXZdW6fdQBhD1oNtxWoUNl2UUOLhB_m82QhWRwZlaXCqBElPjSe96QJlH68e-1mlghJXOGm2uiFE25lLcdxu0bwNN73Xw26h9D4Tz1EzQ6_oasBrTJP3J479ik8M7UlLmtktRor9xpFV7VJb6DDbDjhycJTnAkuFZD6Walmjj2NE_g1qkfAQstS2B6vojZvsPEby1tO61OCW7YCuIg_kaAhakR22nbUXg2OftNgCogBlSmzf8Gw3S1tEA89tuZt-mk_ugpaHayCdxwSXpUR-MPGGIVngyUX7IeY3rEMwqNYQ

Docker registry for Offline deployment

ATOM can be deployed offline using a locally hosted Docker registry. Docker images are pulled from the locally available registry to the respective nodes for ATOM deployment.

The registry template “docker-registry-300-0421” is the OVA that can be used. Deploy the OVA through vCenter:

  • The specs for the docker registry VM will be 4 vCPU / 32 GB RAM / 300 GB SSD / 1 NIC.

  • Log into the VM using the default credentials atom/secret@123.

  • For bootstrapping the node with basic interface, DNS and NTP configs, run node_setup.py, which is present in the home directory, using sudo privileges as described in the section New Kubernetes cluster.

  • After completion of the bootstrap process, we are ready to begin the Docker registry installation. Run the node_setup.py script and select Docker registry installation by entering 2 when prompted for a choice.

  • For a fresh install, select the “Complete Docker Registry Installation for offline Deployment” option by entering 1. If required, each of the other steps can be performed individually, in the exact order. In case of a failure, the user can retry by giving the appropriate option at the step where the process failed.

  • Provide the IP option using “1”, or use the hostname if it can be resolved. Give the project name, which serves as the repo name. It needs to be provided at a later stage, so make a note of it.

The default login for the registry will be admin/admin (http://<registry-ip>).
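Once the registry is up (see the sample installation output below), images can be retagged and pushed into it from a machine that has internet access. A minimal sketch, assuming <registry-ip> and the <project> name created above (the actual image list ships with the ATOM release):

docker login <registry-ip> -u admin -p admin
docker tag quay.io/release/atom-core:8.X.X.X.YYYYY <registry-ip>/<project>/atom-core:8.X.X.X.YYYYY
docker push <registry-ip>/<project>/atom-core:8.X.X.X.YYYYY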

The output of the registry installation process may take time and will look as follows:

Redirecting to /bin/systemctl restart docker.service

harbor/harbor.v2.2.1.tar.gz

harbor/prepare

harbor/LICENSE

harbor/install.sh

harbor/common.sh

harbor/harbor.yml.tmpl

prepare base dir is set to /home/atom/harbor

Unable to find image ‘goharbor/prepare:v2.2.1’ locally

docker: Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 8.8.8.8:53: read udp 172.16.26.105:53734->8.8.8.8:53: i/o timeout.

See ‘docker run –help’.

[Step 0]: checking if docker is installed …

Note: docker version: 20.10.5

[Step 1]: checking docker-compose is installed …

Note: docker-compose version: 1.25.5

[Step 2]: loading Harbor images …

23e1126e5547: Loading layer [==================================================>] 34.51MB/34.51MB

0a791fa5d10a: Loading layer [==================================================>] 6.241MB/6.241MB

478208477097: Loading layer [==================================================>] 4.096kB/4.096kB

a31ccda4a655: Loading layer [==================================================>] 3.072kB/3.072kB

70f59ceb330c: Loading layer [==================================================>] 28.3MB/28.3MB

ef395db1a0f0: Loading layer [==================================================>] 11.38MB/11.38MB

fb2e075190ca: Loading layer [==================================================>] 40.5MB/40.5MB

Loaded image: goharbor/trivy-adapter-photon:v2.2.1

c3a4c23b7b9c: Loading layer [==================================================>] 8.075MB/8.075MB

00f54a3b0f73: Loading layer [==================================================>] 3.584kB/3.584kB

afc25040e33f: Loading layer [==================================================>] 2.56kB/2.56kB

edb7c59d9116: Loading layer [==================================================>] 61.03MB/61.03MB

e5405375a1be: Loading layer [==================================================>] 61.85MB/61.85MB

Loaded image: goharbor/harbor-jobservice:v2.2.1

ab7d4d8af822: Loading layer [==================================================&