A while ago my coworkers and I were talking about refactoring our code and eliminating duplication to improve modularity, so that each piece of logic lives in exactly one place and becomes, in effect, a single point of failure. This makes the code easier to maintain. However, it made me wonder just how much of a dual nature the concept of a "Single Point of Failure" really has.
In the virtual world, developers tend to aim for a single point of failure: they consolidate duplicated code or applications into one place, which makes resources easier to manage. In these cases, increasing the level of singularity is a good thing. For real-world systems, however, a single point of failure is a bad thing.
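To illustrate the software side, here is a minimal sketch (in Python, with hypothetical pricing helpers I made up for the example) of the kind of refactoring we were discussing: two near-duplicate calculations collapsed into one shared function, so any future change or bug fix lives in a single place.

```python
# Before: the same discount rule is duplicated in two places.
# A bug fix would have to be applied twice, and one copy is easy to miss.

def invoice_total_before(prices):
    return sum(p * 0.9 for p in prices)  # 10% discount, copy #1

def quote_total_before(prices):
    return sum(p * 0.9 for p in prices)  # 10% discount, copy #2

# After: one shared helper becomes the deliberate "single point of
# failure" for the discount rule; changing the rate happens exactly once.

DISCOUNT = 0.9

def discounted_total(prices):
    return sum(p * DISCOUNT for p in prices)

def invoice_total(prices):
    return discounted_total(prices)

def quote_total(prices):
    return discounted_total(prices)
```

The trade-off is exactly the one in the title: a bug in `discounted_total` now affects every caller at once, but so does the fix.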
When working with hardware, particularly servers or backup storage, having a single device handle most interactions is a bad thing. Should that hardware go bad, you are really up a creek. This is where redundancy shines: multiple devices share the work, so that should one fail, another can pick up the slack.
Now, some may say there are cases where both are desirable at the same time, and with the advent of virtual machines and systems running in parallel, this is certainly true, especially in enterprise-level server environments. As the developer of a piece of software, you want to write code so that maintenance is quick and easy, since downtime for an enterprise application costs the company large sums of money; time is money, after all. To reduce downtime even further, multiple servers run the same application and are kept synchronized, so that if the hardware on one server fails, the others keep the application available (think Active Directory environments).
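The redundancy side can be sketched just as briefly. Below is a minimal Python example of client-side failover across replicas; the server names and the `fetch` callable are hypothetical stand-ins, not any real API. The caller tries each replica in turn, so no single machine's failure takes the application down.

```python
class ServerDown(Exception):
    """Raised when a replica cannot serve the request."""


def fetch_with_failover(replicas, fetch):
    """Try each replica in order and return the first successful result.

    `replicas` is a list of server identifiers; `fetch` is a callable
    that either returns data or raises ServerDown.
    """
    failures = []
    for server in replicas:
        try:
            return fetch(server)
        except ServerDown as exc:
            failures.append((server, str(exc)))  # note it and move on
    # Only reached when every replica failed: the whole pool is down.
    raise RuntimeError(f"all replicas failed: {failures}")


# Example: the first replica is down, the second one answers.
def fake_fetch(server):
    if server == "srv-1":
        raise ServerDown("hardware fault")
    return f"data from {server}"
```

Here the *set* of replicas has no single point of failure, even though each replica internally runs the same consolidated, easy-to-maintain code.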
I just find it interesting, from a software developer's point of view, how dual the concept of the "Single Point of Failure" is, but I fully understand its application and benefits in both directions.