Five things to get off your IT network

Networks have much longer lifespans than the computers that connect to them. It's not at all uncommon to find that the wires over which your traffic is moving are 25 years old - and that can extend to the switches, backbone fibres, and even DSL routers. Wiring cupboards have been neglected for so long that a generation of workers nowadays just assumes everything is wireless.

But the game is changing. When I say that I don't do any online banking unless I'm on a wired connection, people no longer suppress a titter: they nod wisely. My insistence has gone from being a tinfoil-hat choice to an understandable caution. Because while, in theory, your network ought to be a safe space, in practice it's more like the space under your bed when you were six years old. Take a look and you may see nothing concerning; turn your back and it's instantly filled with monsters and creepy-crawlies, in the form of errors, resource holes and malicious intrusions.

Taking control of your network involves actions at all levels of the setup. Physical stuff, logical design, server positioning, and even cloud strategy all have parts to play in being able to assure insurers, clients, regulators and neighbours that your network is a known, secure quantity. The first step, though, is to identify the things that shouldn't be there at all and get rid of them.

Item One: Old cables

From a management point of view, there's a certain logic to not worrying too much about cable cupboards that seem to be working perfectly well. The assumption is that everything important is either wireless, or on its way to becoming so.

It must also be said that Ethernet cables out in the wild generally endure pretty well, thanks to the twisted-pair construction: inside them, eight conductors are twisted together in twos, like strands of DNA, which makes them harder to stretch, snap or fray. This was surely a good thing back in the day when Ethernet was (by modern standards) slow, and cables were expensive.

Nowadays, though, when you can buy a Cat6 cable for a quid, it's not worth staking the stability of your business on ancient leads. It's not even worth testing to separate the good from the bad: replace all those crushed-looking, super-curly and superannuated patch leads with new ones, and draw a line under all the false positives, slow connections and intermittent faults.

Item Two: Unmanaged anything

The very first networks were pretty simple-minded assemblies of parts. The emphasis was on the "Ether" part of "Ethernet": your network was effectively made up of radio signals, captured and guided down wires. That was then. Today, the corporate network has become a series of single links between smart chips in client machines and smart ports on switches. Viewed from our modern perspective, the radio-like era of Ethernet looks like a first-try design.

Yet plenty of companies stick with the old-fashioned mindset. A modern Ethernet device will happily connect to old-school infrastructure, and data will flow to where it's sent - so if you can get a day's work done in a day, why do you need smarter, faster connections?

The problem is that you don't know what's travelling around your wires. Whether you have a rogue data problem or just a bad signal problem, if you're still using 20th-century kit, your options for monitoring and isolating certain bits of the network are going to be very limited indeed. So don't wait until things go wrong and you're left wondering why: from printers to doorbells, there's no reason why any device on your network shouldn't have a management interface that you can log into and query directly.

Don't worry that you're buying management overhead. It's true that the blurb for most managed switches focuses on high-powered uses for the infrastructure, of the sort which call for specialist, trained staff to configure and maintain. But you don't need to buy in at this level to get a benefit. The first two or three baby steps - such as just checking the rate of collisions on each connected device - are enough to open up a world of insight that can help you improve your network in all sorts of ways.
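
To give a flavour of what those baby steps look like, here's a minimal sketch that reads a managed switch's standard IF-MIB counters over SNMP, using the classic synchronous API of the pysnmp library. The switch address, community string and interface index are placeholders, and newer pysnmp releases move this API around, so treat it as an illustration rather than a recipe.

```python
# Minimal sketch: read error/discard counters for one switch port over SNMP.
# Assumes a managed switch at 192.0.2.10 with read-only community "public",
# and that the port of interest is interface index 3 - all placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

SWITCH = "192.0.2.10"   # placeholder management address
IF_INDEX = 3            # placeholder interface index

# Standard IF-MIB counters: inbound errors and discards on that port.
oids = [
    ObjectType(ObjectIdentity("IF-MIB", "ifInErrors", IF_INDEX)),
    ObjectType(ObjectIdentity("IF-MIB", "ifInDiscards", IF_INDEX)),
]

error_indication, error_status, _, var_binds = next(
    getCmd(SnmpEngine(), CommunityData("public"),
           UdpTransportTarget((SWITCH, 161)), ContextData(), *oids)
)

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

Even a simple counter read like this, run on a schedule, is enough to spot a port quietly dropping packets long before anyone complains.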

Item Three: Voodoo computers

The term may be fanciful, but the problem is very real. We're referring here to devices that require special treatment and impose specific, not necessarily desirable, demands on your network configuration. This is often a symptom of a bought-in product that, shall we say, hasn't been integrated and tested to professional standards - either for reasons of cost, or to get to market as early as possible and dazzle the optimistic technology buyer.

One example we saw was a CCTV package that was sold as allowing you to watch your premises from home on your iPad. The system itself turned out to comprise four little cameras, connected by long snaky composite video leads to the back of a small box running Windows 98. The lengthy documentation included instructions on opening a hole through your router and firewall to make the box directly accessible to the outside world. In a sense, you have to admire the bravery of this new invention: open-circuit television, with zero security, built on tech nearly two decades out of support and service.

There are plenty of other examples of terrifyingly naive or actively dangerous networked devices, which require your entire network to be configured around them, with no regard for the safety or integrity of anything else that might be connected. If such devices have already wormed their way into your business, it can be tempting to simply leave them in place: their bespoke accommodations will already have been made, and replacing them with better-behaved kit looks like spending money to replicate something you already have. However, you should realise that, as long as you allow these ad hoc creations onto your network, you risk exposing your business to untold vulnerabilities, as well as costly contingencies.
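
If you're not sure whether any such devices have already crept in, a quick sweep of your address range for well-known, typically unauthenticated services is a reasonable first audit. The sketch below is purely illustrative - the subnet and the port list are assumptions, and it uses nothing beyond Python's standard library - but anything that answers is worth investigating.

```python
# Illustrative sweep: which hosts on the LAN answer on ports often left open
# by poorly integrated devices (telnet, plain HTTP, RTSP, VNC)?
# The 192.168.1.0/24 range and the port list are assumptions - adjust to suit.
import ipaddress
import socket

SUBNET = ipaddress.ip_network("192.168.1.0/24")
SUSPECT_PORTS = {23: "telnet", 80: "http", 554: "rtsp", 5900: "vnc"}

for host in SUBNET.hosts():
    for port, name in SUSPECT_PORTS.items():
        try:
            # A completed TCP connection means something is listening there.
            with socket.create_connection((str(host), port), timeout=0.3):
                print(f"{host} is listening on {port} ({name}) - worth a closer look")
        except OSError:
            pass  # closed, filtered, or no host at that address
```

A serial scan like this is slow but harmless; it's a starting point for a conversation, not a substitute for a proper audit.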

Item Four: Excessively converged functions

Convergence is a bit of an aspirational buzzword in networking. Broadly, the idea is that since everything can travel over the network as packets, everything should do so. Storage blocks, video, web pages and Word documents - even Skype and regular phone system audio: at the transport level, it's all packet data. So why not save some money and send all your services over just one wire?

The answer is that not all packets are created equal. Storage packets don't need any special priority, but they can be very large. Audio packets for VoIP telephony are normally much smaller, but they don't like being delayed, or re-sent after a collision. Video sometimes calls for multicast, but not always; multicast can be handled inconsistently by different tiers of switch and router, and most successful video-conferencing apps are not at all bothered by its absence.

Convergence certainly has its place. It can greatly simplify large, old networks, reducing two or three enormous wiring diagrams down to one - as long as you're ready to spend on new smart central hardware to balance up all the differing loads and functions. However, for a convergent network to run nicely, you want something beyond the de facto business standard of Gigabit Ethernet over UTP: if you want a widespread standard for switches and wiring, 10GbE is where it really starts to work.

Indeed, for every argument to converge, there's a counter-argument. Splitting up your storage traffic, so big calls on SAN disks don't impede fast-moving packets coming out of your server, is much easier than it used to be: back in the days when unmanaged switches were the norm, coloured patch leads helped administrators keep track of different uses and address pools.

Today, modern switches allow you to segregate different subnets and physical ports both cleanly and traceably, entirely within software. You can log in from your iPad and interrogate both the physical state of a given port and its logical configuration - all at the same time.
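
As an illustration of how approachable that has become, the sketch below uses the netmiko library to log in to a switch and pull back both the physical port status and the VLAN layout in one sitting. The device type, address, credentials and show commands are all assumptions - they follow a Cisco IOS-style CLI, and your vendor's equivalents will differ.

```python
# Sketch: interrogate a managed switch's port state and VLAN layout in software.
# Device details are placeholders; command names assume a Cisco IOS-style CLI.
from netmiko import ConnectHandler

switch = ConnectHandler(
    device_type="cisco_ios",   # assumption - pick the type for your vendor
    host="192.0.2.20",         # placeholder management address
    username="netadmin",       # placeholder credentials
    password="change-me",
)

try:
    print(switch.send_command("show interfaces status"))  # physical state per port
    print(switch.send_command("show vlan brief"))          # logical segregation
finally:
    switch.disconnect()
```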

Unfortunately, conversations about deconverging traffic are often acrimonious, because a lot of people with network responsibilities in larger firms have strongly held positions - entrenched by years of defending both their budget and their network's performance from management attempts to rationalise both.

But if someone's still saying that convergence is the inevitable future, that suggests that they haven't yet thought through the question of resilience. The tendency is to benchmark your network in terms of metrics: how many devices it supports, how fast it goes, how big it is and so forth. However, when the network is the backbone of your business, those numbers are less important than knowing that your phone system won't go down when your website gets infected. It's difficult to quantify a system's resilience to attack, or its ability to detect problems and alert you to them, but that doesn't mean those things aren't of huge importance.

Item Five: Guest devices

The primary function of your network might be to help you make money - but there's a now-pervasive sense, especially among smaller and more modern companies, that it's also a civic resource to be shared with your staff and all their visitors. After all, everyone's digitally dependent now, and failing to offer someone an internet connection is a hospitality failure on a par with neglecting to offer them a seat.

The problem is that this idea (or its implementation) often rests on assumptions of good faith - or on a mistaken sense of how hard it is to keep your network sufficiently divided, with enough subnets and firewalls to keep traffic from guest devices away from private resources. I'm not saying you should enforce an iron-clad "no visitors" policy, but you do need to make certain that guest devices are isolated.
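
One way to make certain is to test it: sit on the guest network and confirm that the private side genuinely isn't reachable. The sketch below is a minimal check using only Python's standard library; the internal addresses and ports are placeholders for whatever actually matters on your own network.

```python
# Run this from a device on the guest network: every connection attempt below
# SHOULD fail if guest isolation is doing its job. Targets are placeholders.
import socket

INTERNAL_TARGETS = [
    ("10.0.0.5", 445),    # e.g. a file server's SMB port
    ("10.0.0.10", 3389),  # e.g. a desktop's RDP port
    ("10.0.0.1", 443),    # e.g. the core switch's management interface
]

for host, port in INTERNAL_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"PROBLEM: guest network can reach {host}:{port}")
    except OSError:
        print(f"OK: {host}:{port} is not reachable from the guest network")
```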

Turning the screw on visitors may seem like a low priority, but do you want to adopt best practice or not? We know of a network engineer whose first job was as a marine technologist in the Royal Navy. These days he works on superyachts, and it turns out that any superyacht worthy of the name has three or four classes of network user, keeping the internal systems, the crew, the owner and charter guests all within their respective boxes. Well, you have to admit, it's a refreshing change from making examples out of sales versus manufacturing.

And the reality is that if you can't implement the same sort of regime - dividing up your network into areas with differing access permissions - then that should be a huge red flag, raising questions about your administration as a whole.