
Cross Domain Automation: Effortless Integration for Data Center and Campus LAN

The two different network domains

Networks are categorized into distinct domains based on their intended roles and characteristics. Data center networks are geared toward high performance, redundancy, and security for data-heavy operations, requiring powerful servers and robust network resilience. Campus core networks, by contrast, prioritize interconnecting the various segments of an organization and facilitating smooth communication. Because traffic flow patterns differ (north-south versus east-west), the architectural designs of data center and campus networks are typically tailored to their specific traffic requirements to optimize performance in each domain.

What do they have in common?

Macro-segmentation is a pivotal strategy that regulates and restricts network traffic flow within distinct network segments. It aims to break a network up into multiple discrete chunks to support business needs. In both data center and campus networks, segmentation is based on the origin and destination of specific traffic, enabling stringent control over data movement. For instance, in the banking sector, a security policy might restrict branch employees from accessing the financial reporting system, effectively safeguarding sensitive information. Contemporary networks have transitioned from conventional segmentation techniques like ACLs and VLANs to more refined macro-segmentation policies that offer the granularity and flexibility to cater to an organization’s precise requirements or specialized business applications. Macro-segmentation empowers organizations to control access between different segments precisely, allowing or denying interaction per predefined policies.
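
The allow/deny behavior described above can be sketched as a default-deny policy lookup. This is a minimal illustration, not ATOM's implementation; the segment names are hypothetical examples echoing the banking scenario.

```python
# Minimal sketch of a macro-segmentation policy: traffic between two
# segments is denied unless an explicit allow rule exists (default-deny).
# Segment names ("branch", "finance", ...) are hypothetical examples.

ALLOW_RULES = {
    ("branch", "internet"),
    ("hq", "finance"),
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Permit traffic only if the (source, destination) pair is explicitly allowed."""
    return (src_segment, dst_segment) in ALLOW_RULES

# Branch employees cannot reach the financial reporting segment:
print(is_allowed("branch", "finance"))  # False
print(is_allowed("hq", "finance"))      # True
```

The default-deny stance is what shrinks the attack surface: any segment pair not named in the policy is isolated automatically.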

Network with and without Macro-Segmentation

Advantages of Macro-Segmentation

  1. Improved monitoring: Macro-segmentation extends network control and visibility beyond the perimeter firewall, providing a deeper understanding of data flows within an organization’s network. It helps identify the traffic intended for specific segments, increasing visibility and enhancing monitoring. 
  2. Network security: Users can set strict internal boundaries and isolate the traffic, decreasing the attack surface for malicious attackers. It is easier for security teams to enable surveillance of the traffic for different sections of the network.
  3. Regulatory compliance: Macro-segmentation is instrumental in attaining regulatory compliance, empowering network managers to fortify critical data resources and grant access exclusively to authorized users, ensuring a robust security framework.
  4. Secure remote working for campus users: Remote devices follow a similar pattern. Corporations can partition networks to establish remote work clusters, giving employees access to crucial resources while shielding the network from potential vulnerabilities in device security.
  5. Optimized performance: Traffic is directed precisely where it is required, reducing network latency.

Driving Efficiency through Integration

There are business needs where campus users interact with data center servers: critical processes running on campus increasingly depend on applications and data hosted in the data center. The two dissimilar network domains must therefore be connected while macro-segmentation is maintained in both. The segments must be mapped appropriately so that the right campus network user can access the data center server homed in a specific segment. Effective connectivity demands a meticulous mapping of network segments, guaranteeing that authorized users within the campus network are granted access only to designated segments within the data center. This fosters secure and efficient data exchange while upholding data integrity and privacy, and it enables efficient collaboration between campus-based operations and data center resources.
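
The segment mapping described above can be modeled as a simple lookup from campus Virtual Network to data center VRF; a VN with no entry gets no data center access. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch: mapping campus Virtual Networks (VNs) to data center
# VRFs so that only the mapped segment pair can exchange traffic.
# The VN/VRF names below are illustrative, not from the source.

VN_TO_VRF = {
    "Campus_VN_Red": "DC_VRF_Red",
    "Campus_VN_Green": "DC_VRF_Green",
}

def dc_vrf_for(vn):
    """Return the data center VRF a campus VN is mapped to, or None if unmapped."""
    return VN_TO_VRF.get(vn)

print(dc_vrf_for("Campus_VN_Red"))   # DC_VRF_Red
print(dc_vrf_for("Campus_VN_Blue"))  # None -> no data center access
```

Keeping the mapping explicit and exhaustive is what preserves macro-segmentation across the domain boundary: an unmapped VN simply cannot reach any data center segment.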

Campus and Data Center Network Integration

The Complexities & The Implementation

Modern campus LAN fabrics incorporate VXLAN-LISP data and control planes using Virtual Networks (VNs) for macro-segmentation. In contrast, data center fabrics employ VXLAN-EVPN data and control planes with VRFs for network segmentation. Given the distinct protocols, configurations, and device variations, seamless connectivity demands meticulous segment mapping for controlled access.

In the above illustration, Server1 (Red) and Server2 (Green) are in different macro-segments (VRFs). Similarly, PC1 and PC2 are in different macro-segments (VNs). Integrating the data center network with the campus LAN means leaking routes between the corresponding VRF and VN.

The day 0 configuration involves provisioning:

  1. IP connectivity and the underlay routing protocol between the campus nodes.
  2. Role assignment for campus nodes: border, control, and edge.
  3. The overlay VXLAN-LISP fabric.
  4. Endpoint/user connections from the edge nodes.
  5. An L3 transit network from the border node toward the data center.
  6. Intermediate fusion IP networks to relay traffic between the data center and campus.
  7. IP connectivity and the underlay routing protocol between the data center switches.
  8. Role assignment for data center switches: spines and leaves.
  9. Connections to the servers hosted in the data center.
  10. Extension of the VRF used by the campus VN over the fusion network. (Note: this extension of the campus VRF into the data center is known as the shadow VRF.)
  11. The shadow VRF on the data center border leaf.
  12. Mapping of the data center server VRF to the shadow VRF, enabling route exchange between them. (Note: the mappings are predefined or given as input by the network administrator.)
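
Day 0 provisioning is inherently ordered: the overlay cannot come up before the underlay, and the shadow VRF mapping presupposes the fusion network. A small sketch of such a fail-fast ordered workflow (an illustration, not ATOM's actual engine; step names are paraphrased):

```python
# Illustrative sketch: run day-0 steps strictly in order, stopping at the
# first failure so later steps never run against a broken prerequisite.

DAY0_STEPS = [
    "campus underlay IP connectivity and routing",
    "campus node roles (border/control/edge)",
    "campus overlay VXLAN-LISP fabric",
    "endpoint connections from edge nodes",
    "L3 transit toward the data center",
    "fusion network provisioning",
    "DC underlay IP connectivity and routing",
    "DC switch roles (spine/leaf)",
    "server connections",
    "shadow VRF extension over the fusion network",
    "shadow VRF on the DC border leaf",
    "server VRF to shadow VRF mapping",
]

def run_day0(steps, provision):
    """Execute steps in order; return (completed steps, failed step or None)."""
    done = []
    for step in steps:
        if not provision(step):
            return done, step
        done.append(step)
    return done, None

done, failed = run_day0(DAY0_STEPS, provision=lambda step: True)
print(len(done), failed)  # 12 None
```

Here `provision` stands in for whatever pushes configuration to the devices; in practice each step would be a Netconf/CLI transaction.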

 

The day N configuration involves provisioning and brownfield discovery:

  1. Discovering devices, their roles, and the provisioned fabric on the campus.
  2. Discovering the provisioned VNs and their associated VRFs on the campus.
  3. Discovering devices, their roles, and the fabric in the data center.
  4. Discovering the provisioned VRFs and VNIs from the data center fabric.
  5. Taking the campus VN and data center VRF as the network administrator’s input.
  6. Provisioning the shadow VRF extension from the campus through the fusion network to the data center border leaf.
  7. Mapping and merging the connections between the shadow VRF and the data center VRF.

Note: The server VRF that needs to be mapped to the campus user VN can be taken as input for mapping.
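
Because day N starts from discovery, the administrator's requested mapping should be validated against what was actually found in the brownfield network before anything is provisioned. A hypothetical sketch of that validation step (names are invented):

```python
# Hypothetical day-N sketch: the admin's VN-to-VRF mapping request is
# checked against the discovered inventory before the shadow VRF
# extension is provisioned.

discovered_campus_vns = {"VN_Red", "VN_Green"}
discovered_dc_vrfs = {"VRF_Red", "VRF_Green"}

def validate_mapping(mapping, vns, vrfs):
    """Reject mappings that reference an undiscovered VN or VRF."""
    errors = []
    for vn, vrf in mapping.items():
        if vn not in vns:
            errors.append(f"unknown campus VN: {vn}")
        if vrf not in vrfs:
            errors.append(f"unknown data center VRF: {vrf}")
    return errors

print(validate_mapping({"VN_Red": "VRF_Red"},
                       discovered_campus_vns, discovered_dc_vrfs))   # []
print(validate_mapping({"VN_Blue": "VRF_Red"},
                       discovered_campus_vns, discovered_dc_vrfs))   # ['unknown campus VN: VN_Blue']
```

An empty error list gates the provisioning of step 6; any error is returned to the administrator instead of being pushed to the devices.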

Day N network operations criticalities:

  1. The user needs to capture the device’s previous configuration version and ensure a golden configuration is available for recovery purposes.
  2. The smooth operation of the provisioned circuit should be verified continuously. Any degradation should trigger a notification, and triage should be performed using real-time monitoring and service assurance.
  3. SLA and impact analysis should be performed in case of degradation.

The two network domains comprise different network protocols, vendors, and operating systems, and their configurations are extensive and complex. Deploying data center and campus networks is therefore a time-consuming and complex procedure, which can adversely affect the SLA of service delivery. Change requests to map data center to campus macro-segments are likewise tedious and time-consuming. The planned configuration must be verified and validated, pre-checks such as resource availability and already-used configuration parameters must be performed, and post-verification of the services and validation of expected performance must be carried out.

ATOM's Triumph Over the Challenge

ATOM streamlines device onboarding by utilizing Netconf/SSH and SNMP, demonstrating its capacity to seamlessly interact with various network devices across multiple vendors and operating systems. By leveraging its meticulously defined YANG models for campus and data center topologies, ATOM orchestrates a systematic configuration process, initializing fundamental IP connectivity based on physical and virtual connections. Subsequently, underlay routing protocols are provisioned, and essential features within the network elements are enabled to establish overlay fabric in both domains. To this end, VXLAN-LISP is deployed for campus fabric, while VXLAN-EVPN serves as the framework for data centers. ATOM adeptly identifies pre-existing configurations when a brownfield network is encountered, ensuring a comprehensive understanding of the network’s current state. Managing endpoint connectivity within the fabric, ATOM extends Virtual Networks (VNs) to the campus connection and Virtual Routing and Forwarding (VRF) to the data center using the shadow VRF, guaranteeing a seamless integration process.

Moreover, when a fusion network linking the data center and campus network is established, ATOM facilitates the integration by interconnecting the two networks through the border node in the campus and the border leaf switch in the data center. Employing BGP connectivity, ATOM forges a robust underlay connection, facilitating the extension of VRF used in campus VN over the fusion network to the data center leaf, thus establishing a fluid data path. In the case of a brownfield network environment, ATOM conducts a comprehensive analysis of VRF from both the data center and campus, offering network administrators the flexibility to map preferred campus VN/VRF to the data center VRF based on their specific preferences. Leveraging its integration with messaging platforms and email, ATOM ensures seamless communication, keeping all relevant stakeholders informed about the ongoing provisioning and integration processes, thereby facilitating efficient collaboration and transparency across the entire network infrastructure.

When a campus environment has to communicate with multiple data centers, or a data center is connected to multiple campuses, those combinations can be configured using ATOM’s campus-to-DC mapping feature.
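
Such many-to-many combinations amount to a relation between campuses and data centers rather than a one-to-one map. A minimal sketch with hypothetical site names:

```python
# Sketch of many-to-many campus/data center combinations: one campus may
# map to several data centers and vice versa. Site names are hypothetical.

mappings = [
    ("Campus_East", "DC_1"),
    ("Campus_East", "DC_2"),   # one campus, multiple data centers
    ("Campus_West", "DC_2"),   # one data center, multiple campuses
]

def data_centers_for(campus):
    """All data centers a given campus is mapped to."""
    return sorted(dc for c, dc in mappings if c == campus)

def campuses_for(dc):
    """All campuses mapped to a given data center."""
    return sorted(c for c, d in mappings if d == dc)

print(data_centers_for("Campus_East"))  # ['DC_1', 'DC_2']
print(campuses_for("DC_2"))             # ['Campus_East', 'Campus_West']
```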

L2 topological view

Discovery of Connections and Devices

User Inputs to build the services

Resource allocation from ATOM

Configuration payload push to the devices

Verification and validation

ATOM ensures smooth configuration deployment by conducting thorough pre-checks on the availability of the interfaces to be configured and by detecting conflicting or overlapping parameters. The post-checks involve service verification after provisioning: ATOM has probes that measure HTTP, TCP/UDP, and ICMP performance over the provisioned network path. This reduces service delivery time and helps organizations cut down their activation time.
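
The pre-check idea can be sketched as a conflict scan over the resources a change request wants to consume; only an empty conflict list lets the push proceed. An illustrative sketch (not ATOM's code; interface names and VNI values are hypothetical):

```python
# Illustrative pre-check: before pushing configuration, verify the target
# interfaces are free and requested VNIs do not overlap with ones already
# in use. Values below are hypothetical.

in_use_interfaces = {"Ethernet1/1", "Ethernet1/2"}
in_use_vnis = {10100, 10200}

def precheck(interfaces, vnis):
    """Return a list of conflicts; an empty list means the push can proceed."""
    conflicts = [f"interface busy: {i}" for i in interfaces if i in in_use_interfaces]
    conflicts += [f"VNI overlap: {v}" for v in vnis if v in in_use_vnis]
    return conflicts

print(precheck(["Ethernet1/3"], [10300]))  # [] -> safe to provision
print(precheck(["Ethernet1/1"], [10100]))  # two conflicts reported
```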

Error handling

Whenever the service provisioning workflow fails at any stage, the configuration is rolled back and the network returns to its initial state.
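
One common way to implement this return-to-initial-state behavior (an assumption about the mechanism, not ATOM's actual code) is to record an undo action for each applied stage and unwind them in reverse on failure:

```python
# Sketch of fail-safe provisioning: each applied stage records an undo
# action; any failure unwinds the applied stages in reverse order, so the
# network state ends up exactly where it started.

def run_workflow(stages, state):
    """stages: list of (apply, undo) callables that mutate `state`."""
    applied = []
    try:
        for apply_fn, undo_fn in stages:
            apply_fn(state)
            applied.append(undo_fn)
    except Exception:
        for undo_fn in reversed(applied):  # roll back in reverse order
            undo_fn(state)
        return False
    return True

def add_shadow_vrf(s):
    s["vrfs"].append("shadow_vrf")

def remove_shadow_vrf(s):
    s["vrfs"].remove("shadow_vrf")

def failing_push(s):
    raise RuntimeError("config push failed")  # simulated device failure

state = {"vrfs": []}
ok = run_workflow([(add_shadow_vrf, remove_shadow_vrf),
                   (failing_push, lambda s: None)], state)
print(ok, state)  # False {'vrfs': []}
```

The first stage succeeds, the second fails, and the recorded undo removes the shadow VRF again, leaving the state unchanged.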

Configuration management: ATOM facilitates seamless config rollback and versioning, allowing quick reversion to previous configurations if needed. 

Monitoring and alerting

ATOM employs SNMP-based monitoring and real-time telemetry alongside an alert-correlation feature for swift issue identification, and remedial actions can be defined per alert type using ATOM’s closed-loop automation feature.
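
The per-alert-type remediation can be sketched as a dispatch table from alert type to action. This is a minimal illustration with invented alert names and actions, not ATOM's remediation catalog:

```python
# Minimal closed-loop sketch: each alert type dispatches to a remedial
# action; unknown types fall back to opening a ticket for a human.
# Alert names, devices, and actions below are hypothetical.

def restart_bgp(alert):
    return f"restarting BGP session on {alert['device']}"

def open_ticket(alert):
    return f"ticket opened for {alert['type']} on {alert['device']}"

REMEDIATION = {
    "bgp_session_down": restart_bgp,
}

def handle_alert(alert):
    """Dispatch to the configured action; default to human escalation."""
    action = REMEDIATION.get(alert["type"], open_ticket)
    return action(alert)

print(handle_alert({"type": "bgp_session_down", "device": "border-leaf-1"}))
print(handle_alert({"type": "high_cpu", "device": "edge-2"}))
```

The fallback path matters: closed-loop automation should only act on alert types it has an explicit, tested remedy for, and escalate everything else.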

Conclusion

By serving as a centralized platform for both data center and campus networks, ATOM significantly streamlines network management, offering a unified perspective and control over the entirety of the network infrastructure. Acting as the single source of truth, it provides a comprehensive and cohesive view of the network’s configuration, ensuring that administrators can efficiently monitor, manage, and optimize the performance of the entire network ecosystem from a single interface. This unified approach not only simplifies the complexities associated with network management but also fosters enhanced operational efficiency, enabling administrators to make informed decisions and implement targeted strategies to bolster network performance and resilience across both the data center and campus environments.

Additional Contributors: Manisha Dhan
