Everybody makes mistakes
This statement is many things in one: a criterion for judging a plan reliable, a guideline for self-improvement, a comforting admission when something goes awry, and likely more that I cannot think of off the top of my head.
A fair warning
This is a warning against relying too heavily on agreements and laws for high-risk assets, where a single mistake in the system can completely nullify the guarantees it provides. Such rules can prevent mass violations, but relying on them as an individual is risky.
Possibly the most relevant application of this today is trusting a large company with your data, where nothing protects that data from the company itself except its own promise. This is how most cloud services operate today, though not all.
Security advice
Over the course of my life I have seen many people fall into the trap of striving to be perfect, so as not to make any mistakes at all. This mindset is very prone to “analysis paralysis”, where no action is taken until the plan has no possible failure scenarios. That point usually never arrives, either because the complexity of the domain keeps throwing in new issues to consider, or because of a contradiction with no obvious workaround.
I have found a much more productive approach that does not chase every failure scenario to the very end, instead considering failures in decreasing order of probability, down to some threshold above zero. Not every failure considered has to be avoided entirely; it only has to cause a limited impact that will not impede the execution of other plans.