Discover key strategies for securing industrial networks, managing IT/OT collaboration, and implementing resilient architectures in critical environments.


Where the Packets Roam: Navigating Secure Network Architectures in Industrial Environments

Over the last four decades, the evolution of industrial networks—from serial Modbus and PROFIBUS cabling to today’s Ethernet-enabled, cloud-connected plants—has forced IT, OT, and security professionals to rethink fundamentals. As digital transformation meets the relentless drive for uptime, the border between information technology (IT) and operational technology (OT) infrastructure has become both permeable and fiercely contested ground.


This article digs deep into the technical realities CISOs, IT Directors, network engineers, and control system operators face, presenting grounded strategies for secure, resilient, and monitored connectivity in mission-critical environments.


A Historical Perspective: From Air-Gap Illusion to Real Threats

The myth of the “air-gapped” industrial network—once synonymous with security—has long since been debunked. Early programmable logic controllers (PLCs) and distributed control systems (DCS) communicated via RS-232, RS-485, Modbus, or proprietary fieldbuses, with no expectation of external access. The spread of Ethernet (standardized as IEEE 802.3 in 1983) onto plant floors and the convergence of IT and OT in the early 2000s fundamentally shifted the attack surface.


  • Early days: Proprietary bus systems (Profibus, DeviceNet, ControlNet) isolated plant floors but resulted in integration headaches.

  • Network convergence: Industrial Ethernet protocols such as PROFINET and EtherNet/IP, running over standard TCP/IP, brought standardized infrastructure—and with it, exposure.

  • Remote monitoring and the rise of malware: From Stuxnet (2010) to TRITON/TRISIS (2017), targeted attacks on programmable logic controllers and safety systems have underlined physical and digital convergence risk.

Structured risk, in this new reality, is no longer about isolation—it’s about control, observation, and resilience.


Network Architecture Fundamentals in Critical Environments

The Purdue Model: A Double-Edged Sword

The Purdue Enterprise Reference Architecture (PERA, aka the Purdue Model), formalized in the 1990s, still underpins most industrial network designs. It defines a layered architecture, from Level 0 (physical process) to Level 5 (enterprise network), with demilitarized zones (DMZs) isolating business and industrial systems.


  1. Level 0: Physical process (sensors, actuators, field devices).

  2. Level 1: Basic control (PLCs, RTUs).

  3. Level 2: Supervisory control (HMIs, SCADA, DCS).

  4. Level 3: Site operations (historians, manufacturing operations management (MOM)).

  5. Levels 4-5: Enterprise IT and cloud.

While foundational for defense-in-depth, PERA was not conceived for today’s reality: mobile operators, remote diagnostics, and cloud analytics. Static zoning and flat architectures are increasingly brittle in the face of agile business and threat landscapes.


Zone & Conduit Design: Beyond VLANs and Firewalls

It’s tempting to see VLAN segmentation, firewall rules, and ACLs as panaceas—drag, drop, done. In reality, secure OT architecture requires:


  • Zones: Network “segments” with shared risk profiles (e.g., control room network, plant floor, DMZ).

  • Conduits: Tightly controlled pathways between zones. Think OPC UA proxies, protocol-specific application gateways, or robust VPN tunnels—not just open TCP ports with “allow” rules.

  • Access: Layered authentication at every junction. If you think LDAP at the edge switch is enough, you’re probably missing the point.
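The zone-and-conduit idea above can be made concrete in a few lines. The sketch below is a minimal, illustrative policy check: zones group assets, conduits are the only permitted inter-zone pathways, and everything else is denied by default. The zone names, host addresses, and protocols are assumptions invented for the example, not a real plant layout.

```python
# Hypothetical sketch of zone/conduit policy evaluation (default deny).
# Zone names, hosts, and protocols are illustrative assumptions.

ZONES = {
    "plant_floor":  {"10.10.1.5", "10.10.1.6"},   # PLCs
    "control_room": {"10.10.2.10"},                # HMI / engineering
    "dmz":          {"10.10.3.2"},                 # OPC UA proxy
}

# Conduits: (source zone, destination zone) -> allowed protocols
CONDUITS = {
    ("control_room", "plant_floor"): {"modbus"},
    ("control_room", "dmz"):         {"opcua"},
}

def zone_of(host):
    for name, hosts in ZONES.items():
        if host in hosts:
            return name
    return None  # unknown hosts belong to no zone

def is_allowed(src, dst, protocol):
    """Default deny: traffic passes only through a defined conduit."""
    src_zone, dst_zone = zone_of(src), zone_of(dst)
    if src_zone is None or dst_zone is None:
        return False
    if src_zone == dst_zone:
        return True  # intra-zone traffic stays inside the zone
    return protocol in CONDUITS.get((src_zone, dst_zone), set())

print(is_allowed("10.10.2.10", "10.10.1.5", "modbus"))  # True: defined conduit
print(is_allowed("10.10.1.5", "10.10.3.2", "modbus"))   # False: no such conduit
```

In a real deployment this logic lives in firewalls and application gateways, but the modeling exercise itself (writing down which zone-to-zone conversations are legitimate) is where most designs reveal their gaps.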

Dealing with Legacy: Constraints and Opportunities

The median age of operational technology in the field is well over a decade. Devices lacking modern authentication, encryption, or even remotely patchable firmware introduce endemic risk. The right architectural strategy accepts these constraints:


  • Protocol whitelisting: Allow only documented, expected communications (e.g., only Modbus read/write between specific hosts).

  • Out-of-band monitoring: Deploy taps/SPANs with deep packet inspection to achieve “visibility without front door exposure.”

  • Jump servers and mediation: Force operator access through hardened brokers (for RDP, SSH, or web), with session logging and MFA.
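As a concrete illustration of protocol whitelisting, the sketch below extracts the function code from a Modbus/TCP frame (7-byte MBAP header, function code as the first PDU byte) and permits only documented read operations between specific host pairs. The host addresses, the allowed-pairs table, and the sample frames are assumptions made up for this example, not a drop-in filter.

```python
# Illustrative Modbus/TCP function-code whitelist: permit only documented
# operations between known host pairs. Hosts and policy are assumptions.

ALLOWED = {
    # (client, server) -> permitted Modbus function codes
    ("10.10.2.10", "10.10.1.5"): {0x03, 0x04},  # read holding/input registers only
}

def modbus_function_code(payload: bytes) -> int:
    """Extract the function code from a Modbus/TCP frame.
    The MBAP header is 7 bytes; the PDU's first byte is the function code."""
    if len(payload) < 8:
        raise ValueError("truncated Modbus/TCP frame")
    return payload[7]

def permit(src: str, dst: str, payload: bytes) -> bool:
    allowed_codes = ALLOWED.get((src, dst))
    if allowed_codes is None:
        return False  # undocumented conversation: drop
    return modbus_function_code(payload) in allowed_codes

# Read Holding Registers (0x03) vs. Write Single Register (0x06) requests:
read_req  = bytes([0, 1, 0, 0, 0, 6, 1, 0x03, 0, 0, 0, 2])
write_req = bytes([0, 2, 0, 0, 0, 6, 1, 0x06, 0, 0, 0, 1])

print(permit("10.10.2.10", "10.10.1.5", read_req))   # True: read is documented
print(permit("10.10.2.10", "10.10.1.5", write_req))  # False: writes not whitelisted
```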

IT/OT Collaboration: More Than a Handshake

Reality Check: Different Philosophies, Same Stakes

For many in OT, system stability has primacy—change control matters more than patch velocity; plant uptime is business continuity. For IT, protection often means patching and the principle of least privilege. This cultural gap is not theoretical—it leads to real-world failures during incident response, upgrades, or even during “routine maintenance windows.”


Bridging this gap demands:


  • Joint governance: Shared risk models and incident response plans. You don't want your first cross-team meeting to be during an actual breach.

  • Asset inventory unification: IT must see the field assets; OT must see the patch state and network status; both must know which systems really matter.

  • Co-managed segmentation: Both teams reviewing firewall and routing changes. No one gets to say “just trust me.”

Protocols, Monitoring, and Logging: Building Common Language

  • Protocol literacy: Both sides should understand the criticality and mechanics of industrial protocols (Modbus, DNP3, S7, OPC UA). Many industrial protocols lack authentication and run in clear text; replacing, encapsulating, or strictly limiting them is non-negotiable.

  • Logging integration: Security analytics must blend logs from plant and enterprise systems. SIEMs should process (and correlate!) PLC logs, Windows events from engineering stations, and even output from serial protocol decoders.

  • Incident drill run-throughs: Tabletop and “live fire” exercises that cross the IT/OT divide.
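The log-integration point above can be sketched simply: pair Windows logon events from an engineering station with PLC write events seen on the wire, and flag writes with no recent logon to explain them. The event shapes, field names, and timestamps below are invented for illustration; a real SIEM pipeline would normalize vendor-specific formats first.

```python
# Minimal sketch of IT/OT log correlation: attribute OT actions to an IT
# identity by host and time window. Event shapes are invented assumptions.
from datetime import datetime, timedelta

it_events = [  # e.g. from Windows event forwarding
    {"time": datetime(2024, 5, 1, 10, 0, 5), "host": "eng-ws-01",
     "event": "logon", "user": "jsmith"},
]
ot_events = [  # e.g. from a Modbus/serial protocol decoder
    {"time": datetime(2024, 5, 1, 10, 0, 40), "host": "eng-ws-01",
     "event": "plc_write", "target": "PLC-3"},
    {"time": datetime(2024, 5, 1, 12, 0, 0), "host": "eng-ws-02",
     "event": "plc_write", "target": "PLC-7"},
]

def correlate(it_log, ot_log, window=timedelta(minutes=5)):
    """Pair each OT action with logons on the same host shortly before it;
    unattributed writes go to the orphans list for investigation."""
    matches, orphans = [], []
    for ot in ot_log:
        who = [it for it in it_log
               if it["host"] == ot["host"]
               and timedelta(0) <= ot["time"] - it["time"] <= window]
        (matches if who else orphans).append((ot, who))
    return matches, orphans

matches, orphans = correlate(it_events, ot_events)
print(len(matches), len(orphans))  # one attributed write, one orphan
```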

Deploying Secure Connectivity: Real-World Approaches

Zero Trust: What it Means for Industrial Systems

“Zero Trust” is neither a vendor product nor a silver bullet, but the principle is sound: every asset must continually prove trustworthiness before communicating. In industrial settings, this manifests as:


  • Micro-segmentation: Grouping assets by role, function, or risk, enforcing controls beyond the perimeter.

  • Identity-based access: MFA, short-lived certificates, and per-session policy checks for remote operations—enforced even when air gaps disappear due to remote access or mobile engineering laptops.

  • Continuous monitoring: Real-time detection of aberrant traffic—an engineering workstation shouldn't suddenly start scanning all PLCs on the subnet.
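The continuous-monitoring heuristic above (a workstation suddenly sweeping the PLC subnet) can be expressed in a few lines: count distinct PLC peers per source within one time window and alert past a threshold. The subnet, threshold, and flow records are assumptions chosen for the example; production tools baseline normal behavior per asset instead of hard-coding a number.

```python
# Sketch of a scan-detection heuristic over flow records: flag sources that
# contact many distinct PLC addresses in one window. Values are assumptions.
from collections import defaultdict

PLC_SUBNET = "10.10.1."
SCAN_THRESHOLD = 3  # distinct PLC peers per window before we alert

def detect_scanners(flows):
    """flows: iterable of (src_ip, dst_ip) pairs observed in one time window."""
    peers = defaultdict(set)
    for src, dst in flows:
        if dst.startswith(PLC_SUBNET):
            peers[src].add(dst)
    return {src for src, dsts in peers.items() if len(dsts) >= SCAN_THRESHOLD}

window = [
    ("10.10.2.10", "10.10.1.5"),   # HMI polling its usual PLC: fine
    ("10.10.2.10", "10.10.1.5"),
    ("10.10.2.99", "10.10.1.1"),   # workstation sweeping the PLC subnet
    ("10.10.2.99", "10.10.1.2"),
    ("10.10.2.99", "10.10.1.3"),
]
print(detect_scanners(window))  # {'10.10.2.99'}
```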

Secure Remote Access: Beyond the VPN

To service the modern plant, secure third-party and remote staff access is unavoidable. Common pitfalls:


  • Flat, broad VPNs: Where remote users suddenly exist “on” the plant network. This is still common, and a huge risk.

  • Brittle jump hosts: Unpatched RDP/VNC hosts or shared credentials.

Modern approaches require:


  • Granular session access: Grant access to a single asset or port for a limited time, with all actions logged and monitored.

  • Protocol brokers: Tools like secure remote protocol proxies, SSH jumphosts with session recording, or even cloud-based access mediation—if designed with strong controls.

  • Session oversight: Real-time “over-the-shoulder” monitoring of third-party sessions, with instant disconnect options if rule breaches occur.
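Granular session access is easiest to reason about as a data structure: a grant names exactly one user, one asset, and one port, expires automatically, and every authorization decision is logged. The sketch below is a hedged illustration of that shape; the `Grant` class, its fields, and the audit format are assumptions for the example, not any particular product's API.

```python
# Hypothetical sketch of time-boxed, single-asset remote access grants.
# The Grant shape and audit-trail format are illustrative assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    user: str
    asset: str          # exactly one target host
    port: int           # exactly one target port
    expires_at: float   # epoch seconds
    audit: list = field(default_factory=list)

    def authorize(self, user, asset, port, now=None):
        """Allow only the named user, asset, and port before expiry;
        log every decision, allowed or denied."""
        now = time.time() if now is None else now
        ok = (user == self.user and asset == self.asset
              and port == self.port and now < self.expires_at)
        self.audit.append((now, user, asset, port, "ALLOW" if ok else "DENY"))
        return ok

g = Grant(user="vendor-anna", asset="10.10.1.5", port=443, expires_at=1_000_000.0)
print(g.authorize("vendor-anna", "10.10.1.5", 443, now=999_999.0))    # True: in scope
print(g.authorize("vendor-anna", "10.10.1.6", 443, now=999_999.0))    # False: wrong asset
print(g.authorize("vendor-anna", "10.10.1.5", 443, now=1_000_001.0))  # False: expired
```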

Industrial Edge, Cloud, and the Next Frontier

Many organizations are running pilot projects for edge analytics, machine learning inference, or predictive maintenance that push data to the cloud—or run compute at the edge closer to the process.


  • Edge Compute Security: Treat every edge device as potentially hostile—harden OS, use TPM-backed key storage, audit firmware/software supply chain.

  • Cloud-integrated telemetry: Ensure egress paths for telemetry are whitelisted, TLS-protected, and cannot be hijacked for reverse shell attacks.

Practical Guidance: Where to Begin (or Improve)

  1. Conduct a topology and asset assessment: Where do the packets go? Diagram real paths, not idealized ones. You’ll find shadow IT/OT more often than not.

  2. Segment with intent: VLANs are a start, but you need firewalls, protocol inspection, and “default deny.” Treat every new connection request as an exception, not the rule.

  3. Instrument for visibility: Network taps on switch uplinks; behavioral anomaly detection; centralized syslog from every system that can speak it.

  4. Plan for layered access controls: Multi-factor for any sensitive console, jump host mediation for all remote/third-party access, conditional firewall rules by workstation and user identity.

  5. Incident and recovery readiness: Practice breach drills that cover both IT and OT, from detection to offline recovery. Never assume one side will just handle it.
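Step 1 above, "diagram real paths, not idealized ones," reduces to a set comparison once flows are captured: diff the conversations actually observed on the wire against the documented paths, and investigate everything left over. The documented-path table and sample flows below are invented for illustration.

```python
# Sketch for a topology assessment: surface "shadow" conversations that
# exist on the wire but not in the documentation. Data is illustrative.

DOCUMENTED = {
    ("10.10.2.10", "10.10.1.5", 502),   # HMI -> PLC, Modbus/TCP
    ("10.10.2.10", "10.10.3.2", 4840),  # HMI -> OPC UA proxy
}

def shadow_flows(observed):
    """Return observed (src, dst, port) conversations with no documented path."""
    return sorted(set(observed) - DOCUMENTED)

observed = [
    ("10.10.2.10", "10.10.1.5", 502),
    ("10.10.1.5", "8.8.8.8", 53),       # a PLC resolving external DNS? investigate.
]
print(shadow_flows(observed))  # [('10.10.1.5', '8.8.8.8', 53)]
```

In practice the observed set comes from taps, SPAN ports, or flow exports; the hard part is usually assembling the documented set at all, which is exactly why the exercise is worth doing.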

Conclusion: Honest Realities, Continuous Vigilance

Engineering secure industrial and critical environments is not checklist work, nor can it ever become “set and forget.” Every architectural assumption eventually breaks—either from business need or attacker tenacity. The professionals responsible for these networks must hold onto their skepticism, favor provable controls over hopeful best practices, and build alliances across traditional boundaries.


And always, always ask: “Where do the packets really roam—and who’s watching?”


A Note to New Engineers

Don’t mistake complexity for maturity, or tool count for security. Spend time tracing packets, reading old protocol specs, and asking why things are done this way. The safest networks are run by people who know both where things go wrong and how to explain why.

