By Evan Schuman, Fortinet Blog

One of the most frightening lessons IT people quickly learn is that large, complex systems—software, hardware and certainly operating systems—always do things that no one knew they could do (or expected them to do). That’s because these systems are created by multiple teams, and each team member documents, at best, most (and certainly not all) of what their own module can do. Also, programmers often create backdoors during development to facilitate and accelerate inevitable fixes and repairs. Most, but not all, remember to remove them before launch.

That’s issue one. The second issue is unintended interactions. Two different apps may each work fine on their own, but no one ever tested them functioning together. A hole in the Starbucks mobile app that left passwords stored in plain text, for example, was caused by the way Starbucks used a crash analytics program. The program worked fine in other mobile apps, so Starbucks had no way of anticipating that it would retain passwords when used slightly differently.
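To make the pattern concrete, here is a minimal Python sketch of that failure mode. The names (CrashReporter, LoginForm) and the logic are hypothetical, not Starbucks’ actual code; the point is simply that a generic analytics library records whatever state the host app hands it, and the host app hands it everything.

    import json

    class CrashReporter:
        """Generic analytics library: dumps whatever state it is given."""

        def __init__(self, get_state):
            self.get_state = get_state  # state callback supplied by the host app

        def report(self, exc):
            # The library cannot know which fields are sensitive; it just
            # serializes the snapshot and appends it to a local log file.
            snapshot = {"error": str(exc), "app_state": self.get_state()}
            with open("crash.log", "a") as log:
                log.write(json.dumps(snapshot) + "\n")

    class LoginForm:
        def __init__(self):
            self.username = "jdoe"
            self.password = "hunter2"  # plain text, captured in the snapshot

        def state(self):
            # The host app exports its full state for "better diagnostics,"
            # including the password field, which now lands in crash.log.
            return vars(self)

    form = LoginForm()
    reporter = CrashReporter(form.state)
    try:
        raise RuntimeError("simulated crash")
    except RuntimeError as exc:
        reporter.report(exc)  # crash.log now contains "password": "hunter2"

Each piece behaves sensibly in isolation; only the combination leaks the credential, which is exactly why neither vendor caught it.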

Years ago, I was talking with an Apple developer who was in charge of font issues on Macs. The manual gave a strict limit on the number of fonts the system could store, but he knew the workaround—he needed it himself, so he designed a hidden exception—and briefed me on it. He said he was the only person who knew about it, as he hadn’t documented it. Multiply that by thousands of Apple developers and it paints a scary picture.

What brings this to mind is a recent security report of a new vulnerability within Apple systems. What makes this one different is that it’s not a bug or hole in the software. It’s in the hardware. Specifically, in the firmware. But wait. Isn’t that off-limits to user access? In theory, yes, but the problem is that write access to the firmware becomes possible after the system goes to sleep and is awakened, though not after a normal restart, which leaves the protections in place.

The security researcher who found the hole, Pedro Vilaca, wrote about the implications. “It means that you can overwrite the contents of your BIOS from userland and rootkit EFI without any other trick other than a suspend-resume cycle, a kernel extension, flashrom, and root access,” Vilaca penned. “Wait, am I saying Macs EFI can be rootkitted from userland without all the tricks from Thunderbolt that Trammell presented? Yes I am! And that is one hell of a hole.”
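Reduced to a toy model, the flaw works like this: the flash chip’s write lock is armed during a cold boot, but on the affected Macs the resume-from-sleep path never re-arms it. The Python sketch below models only that logic; FlashChip and its lock bit are stand-ins for the real chipset registers, not actual firmware code.

    class FlashChip:
        def __init__(self):
            self.locked = False  # a hardware reset clears the write lock

        def lock(self):
            self.locked = True   # once set, the lock holds until the next reset

        def write(self, data):
            if self.locked:
                raise PermissionError("flash is write-protected")
            print("firmware overwritten:", data)

    def cold_boot(chip):
        chip.__init__()  # power cycle resets the chip...
        chip.lock()      # ...and the boot firmware re-arms the write lock

    def resume_from_sleep(chip):
        chip.__init__()  # waking from sleep also resets the chip, but the
                         # resume path on affected Macs never calls chip.lock()

    chip = FlashChip()
    cold_boot(chip)
    try:
        chip.write(b"rootkit")  # blocked after a normal boot
    except PermissionError as err:
        print("after cold boot:", err)

    resume_from_sleep(chip)
    chip.write(b"rootkit")      # succeeds after a suspend-resume cycle

Everything Vilaca lists (root access, a kernel extension, flashrom) merely supplies the write; the suspend-resume cycle is what removes the lock.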

There’s good and bad news from all of this—admittedly, it’s almost all bad. The good news is that this seems to be limited to older Mac systems. Bad News One: you need only one infected system in a semi-secure network to do a lot of damage, and most enterprises have quite a few old Mac desktops still in use, especially in design and CAD/CAM. Bad News Two: Apple is also behind the ubiquitous iPhone, and the implications of a similar hole there are far more alarming.

But the biggest bad news is that this issue has been around for many years and went undetected for much of that time. Critically, it also involves a part of the system that has historically been considered untouchable. Yes, it’s the elements considered safest that can house the deadliest malware.

Remember how long networked printers, copy machines and scanners were around enterprises before someone realized, “Gee, those devices have no firewalls, no VPNs, are completely unprotected and yet they have their own IP addresses and free run of all of our networks. What a wonderful way to enter our network and then have some Trusted Host fun”?

Based on how these systems are created—the multiple independent team approach—literally no one knows all of their capabilities and weaknesses. Then, when companies add homegrown apps along with off-the-shelf programs combined in unexpected ways—plus the new holes that crop up whenever any of those elements gets updated—enterprise number 1234 could easily have holes that exist nowhere else.

A big part of security has always been “strength in numbers.” IT waits to buy Version 1.3 of a product, figuring that any serious problems would have been identified and fixed by then. Traditional anti-virus relies on the same crowdsourcing, which is why it won’t necessarily protect you from anything novel. Traditional AV is always susceptible to the infamous zero-day attack: the bit of malware that nobody has seen before. Unfortunately, almost every day has become a zero day, making faster, more agile systems that rely on threat intelligence even more critical.
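If the crowdsourcing point seems abstract, a short Python sketch makes it plain. Signature-based detection is essentially a lookup against samples someone, somewhere, has already reported; the “database” and payloads below are invented for illustration:

    import hashlib

    # Crowdsourced signature database: hashes of malware already seen in the wild.
    known_bad = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

    def scan(sample: bytes) -> str:
        digest = hashlib.sha256(sample).hexdigest()
        return "blocked" if digest in known_bad else "allowed"

    print(scan(b"EVIL_PAYLOAD_v1"))  # blocked: the crowd has seen this one
    print(scan(b"EVIL_PAYLOAD_v2"))  # allowed: one byte changed, a zero day

Until someone is hit by the new variant and reports it, every scanner built this way waves it through.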

The problem is that the strength-in-numbers approach is simply becoming less effective. Companies are customizing systems, developing internal applications, and relying on flexible frameworks instead of COTS tools for many of their needs. Often, we can no longer assume that with millions of people using a system, we’ll hear about vulnerabilities as they emerge. Nowhere is this more true than with mobile. At least with enterprise IT, there are theoretically some limitations on what software can be added to company-controlled units. With mobile—fueled by Bring Your Own Device (BYOD)—“company units” rarely exist anymore. Not only are user devices potentially contaminated with countless apps from unknown sources, but unexpected interactions between personal and work-related apps will become far more common.

The answer? Companies need to bring in—and, better yet, hire—security penetration testers and have them routinely attack all of their systems under real-world conditions. No longer can you depend on end-user guinea pigs to ferret out problems. Instead, you need to go after these problems proactively, seeking out those unexpected interactions, undocumented holes, and insecure custom applications. ISVs also need to take a closer look at the overall security of their applications in the aggregate, once all of the modules have been assembled. But that’s another problem for another day.
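To give a flavor of what “routinely attack all of their systems” can mean at its simplest, here is a small Python sketch of one recurring internal check: a TCP sweep of your own hosts to flag listeners that shouldn’t be there. The addresses and ports are placeholders, and of course you scan only systems you are authorized to test:

    import socket

    HOSTS = ["10.0.0.5", "10.0.0.6"]  # placeholder internal hosts
    PORTS = [22, 80, 443, 515, 9100]  # 515/9100: classic printer services

    def probe(host: str, port: int, timeout: float = 0.5) -> bool:
        """Return True if the host accepts a TCP connection on the port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        open_ports = [p for p in PORTS if probe(host, p)]
        if open_ports:
            print(f"{host}: open ports {open_ports} -- review against policy")

Real penetration testing goes far beyond this, but even a scheduled sweep this simple would have flagged those wide-open networked printers years earlier.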

The “best” security holes—and this Apple firmware issue is the quintessential example—won’t become obvious until it is far too late. Cyberthieves have a vested interest in keeping these holes quiet so they can continue to be used. If that doesn’t scare you into coughing up some budget for internal security testing, nothing will.

While you’re at it, set up a mechanism to make it easier for independent security researchers to tell you about holes. And when they flag a problem, don’t even think of trying to punish them. Cyberthieves have an easy enough time doing their job. Let’s not make it any easier.