
Reliability at the System Level


I'm proposing some new thinking at the system level. Think about some of the little issues in an IoT home system. Something causes a spurious signal to the home thermostat, pushing the setpoint up to 90 F. You are out of town for two weeks and return to find dead plants and a huge utility bill. Similar thoughts apply to a connected home refrigerator/freezer. It could be a "glitch" or it could be a hacker, but let's take the glitch first.
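One simple defense against the glitch case is for the device itself to reject implausible commands. A minimal sketch (the limits and function names here are illustrative assumptions, not any real thermostat's API):

```python
# Hypothetical sanity check for a thermostat setpoint.
# Thresholds are illustrative assumptions, not product values.
MIN_SETPOINT_F = 45.0   # below this, pipes could freeze
MAX_SETPOINT_F = 85.0   # above this, assume a glitch or an attack
MAX_STEP_F = 10.0       # largest plausible single change

def validate_setpoint(current_f, requested_f):
    """Return a safe setpoint, ignoring implausible requests."""
    if not (MIN_SETPOINT_F <= requested_f <= MAX_SETPOINT_F):
        return current_f  # out of range: keep the current setpoint
    if abs(requested_f - current_f) > MAX_STEP_F:
        return current_f  # implausibly large jump: keep the current setpoint
    return requested_f

print(validate_setpoint(68.0, 90.0))  # the glitch to 90 F is rejected -> 68.0
```

A check like this doesn't make the system secure, but it bounds the damage a single corrupted message can do.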

We have to think about the reliability of the system to do what it is intended to do. The examples above could be the result of a power transient, an IC failure, or a software failure. It is not enough that each piece of the system is "reliable"; the system as a whole must be able to recover. In a home, the consequences are perhaps small, but in a larger context, such as industrial control, they can be severe. We can't go connecting everything without analyzing the possible failure modes and providing recovery capability.
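The recovery idea above is often built as a watchdog: a supervisor that notices when a node's heartbeat goes stale and resets it to a safe state instead of letting the fault persist. A minimal sketch, with illustrative names and timeouts:

```python
# Minimal watchdog sketch (names and timeout are illustrative assumptions).
# A supervisor scans nodes and recovers any whose heartbeat has gone stale.
import time

HEARTBEAT_TIMEOUT_S = 5.0

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by a healthy node."""
        self.last_heartbeat = time.monotonic()

def check_and_recover(nodes, now=None):
    """Return the names of nodes that were reset to a safe state."""
    now = time.monotonic() if now is None else now
    recovered = []
    for node in nodes:
        if now - node.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            # Stand-in for real recovery: restart the node, fall back
            # to a safe default, and alert the operator.
            node.last_heartbeat = now
            recovered.append(node.name)
    return recovered
```

In a real deployment the recovery action would be a hardware reset or a fallback to a known-safe configuration; the point is that the system detects and heals the fault rather than drifting, as the thermostat did.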

We also have to look at security, and frankly, almost everything we build can be hacked. So recognizing the hack and throwing the hacker off the system is one kind of security. Another is providing better protection against hackers in the first place, and that is badly needed: ID and password are insufficient today. And finally, the system has to recover and keep working!
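The "throw the hacker off" idea can be sketched as a simple lockout: after repeated authentication failures, a source is dropped from the system. This is an illustrative assumption about one possible mechanism, not a complete defense:

```python
# Sketch of "recognize the hack and throw the hacker off":
# lock out a source after repeated failed authentications.
# The threshold is an illustrative assumption.
from collections import defaultdict

MAX_FAILURES = 5

class AuthGuard:
    def __init__(self):
        self.failures = defaultdict(int)
        self.blocked = set()

    def record_failure(self, source):
        """Count a failed attempt; block the source past the threshold."""
        self.failures[source] += 1
        if self.failures[source] >= MAX_FAILURES:
            self.blocked.add(source)

    def is_allowed(self, source):
        return source not in self.blocked
```

A production system would add time windows, alerting, and a path for legitimate users to recover access, but even this crude mechanism turns an unbounded guessing attack into a bounded one.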

I don't have an answer; I just see the need. I think processors need better built-in security, something that, properly used, is quite strong. I think the software world has to complete the circle by building software that recognizes the need to recover, perhaps with a special piece of hardware to help.

Bob McConnell
Agreed. On either front there needs to be a concept of system-level redundancy and self-defense. If something goes bad (or *is* bad), the system should be able to heal around the fault and/or isolate the fault. The greatest good of the greatest number. We already see this concept (to some extent) in mesh networks. I've written before about biological parallels for security where the system can attack or at least defend against viral/bacterial invaders. Promising area for research.
This looks like an ideal application for multilayer biometrics that are not only passive but active, in that they pick up voice, heartbeat, or respiration characteristics as part of the required multi-level security protocol.
I hope the industry goes this way to replace stand-alone passwords on computers and main system controllers. But think about how to apply biometrics to a controller deep inside a factory automation system. Biometrics is part of the solution to the security problem, but more is needed so hackers can't attack a remote node and get to the main system controller. It's possible that it could help with the reliability problem too, but it seems inconvenient, at the least, to install biometrics for the IT team on a remote node and keep it up to date.