
Trustworthiness and Truthfulness Are Essential

Their Absence Can Introduce Huge Risks

By Peter G. Neumann

Trustworthiness is fundamental to our technologies and to our human relationships. Over-trusting something that is not trustworthy often leads to bad results. Not trusting something that really is trustworthy can also be harmful.

Trustworthiness should be a basic requirement of all computer-related systems—particularly those used in mission-critical applications, but also in personal situations, such as maintaining your own quality of life. It is essential to the proper behavior of computers and networks, and to the well-being of the entire nations and industries whose computer-based enterprises depend on that behavior.

Computer-Based Systems and People

Trustworthy system behavior starts with trustworthy people: system designers, hardware developers and programmers, operational staff, and high-level managers. Many systems that might appear to be relatively trustworthy can nevertheless be seriously compromised by malware, external adversaries, and insider misuse, or otherwise disrupted by denial-of-service attacks. If such compromises arise unexpectedly, then those systems were most likely not as trustworthy as had been believed.

Thus, we need system designs and implementations that are tolerant of people who might usually be trustworthy but who make occasional errors, as well as systems that are resistant to, and resilient in the face of, many other potential adversities. More importantly, we need measures of assurance—which assess how trustworthy a system might be in specific circumstances (albeit typically evaluated only against perceived threats). Unfortunately, some of the assumptions made prior to the evaluation process may have been wrong, or may change over time—for example, as new types of threats emerge and are exploited.

In addition to the trustworthiness or untrustworthiness of people’s interactions with computers in the above sense, trustworthiness and specifically personal integrity are also critical for people and governments in their daily existence. In particular, truthfulness and honesty are typically thought of as trustworthiness attributes of people. Whether a particular computer system is honest would generally not be considered, because such a system has no moral compass to guide it, but truthfulness is another matter. A system might well be considered dishonest or even untruthful if it consistently or even intermittently gives wrong answers only in certain cases—especially if it had been programmed explicitly to do exactly that. A case in point is the behavior associated with certain proprietary voting systems (see Douglas W. Jones and Barbara Simons, Broken Ballots, University of Chicago Press, 2012).

Systems also can be untrustworthy because of false assumptions made by their designers and programmers. For example, sensors measure whatever they are designed to measure, but that may not include the variables that should be of greatest concern. Thus, a system assessing the slipperiness of the road for a vehicle might rely on a sensor that determines only whether the road is wet, sometimes by something as simple as checking whether the windshield wipers are on. This rather indirect measure of slipperiness can lead to false or imprecise recommendations or actions. At least one commercial aviation accident resulted from an indirect and imprecise determination of runway slipperiness.
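
To make the indirectness concrete, here is a minimal hypothetical sketch (in Python, not drawn from any real vehicle system) of a slipperiness check that uses wiper state as its proxy; it is confidently wrong whenever slipperiness and wetness do not coincide.

    # Hypothetical illustration only: a "slipperiness" estimate that uses wiper
    # state as a proxy for a wet (and therefore presumably slippery) road.

    def road_is_slippery(wipers_on: bool) -> bool:
        # Proxy logic: wipers on is taken to mean the road is wet, hence slippery.
        return wipers_on

    # The proxy fails in both directions:
    print(road_is_slippery(wipers_on=False))  # False, even if the road is icy on a clear day
    print(road_is_slippery(wipers_on=True))   # True, even if the driver is only clearing dust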

Risks of Believing in Computer Trustworthiness

Computers are infallible and cannot lie—right? Logic suggests otherwise. Computers are created by people who are not infallible and who occasionally lie. So we might logically conclude that computers cannot be infallible. Indeed, given the presence of hardware errors, power outages, malware, hacking attacks, and other adversities, they cannot always perform exactly as expected.

In fact, computers can be made to lie, cheat, or steal. In such cases, of course, the faults originate with or are amplified by the people who commission, design, or program systems, or even just use them, but not with the computers themselves. However, even supposedly ‘neutral’ learning algorithms and statistics can be biased and untrustworthy if they are presented with a biased or untrustworthy learning set. Unfortunately, the complexity of systems makes such behavior difficult to detect. Worse, many statistical learning algorithms (for example, deep learning) and artificial-intelligence systems cannot explain how they actually reached their decisions, making it difficult to assess their validity.
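
As a toy illustration of that point, the sketch below (in Python, with entirely invented groups, outcomes, and counts, not taken from any real system) “learns” a decision rule that simply mirrors the skew in the data it is given.

    # Hypothetical sketch: a biased learning set yields a biased "learned" rule.
    # The groups, outcomes, and counts are invented purely for illustration.
    from collections import Counter, defaultdict

    # Skewed historical decisions: group "A" was mostly approved and group "B"
    # mostly denied, for reasons unrelated to any legitimate criterion.
    training = ([("A", "approve")] * 90 + [("A", "deny")] * 10 +
                [("B", "approve")] * 30 + [("B", "deny")] * 70)

    def learn_rule(data):
        # Stand-in for any statistical learner that reflects the base rates it sees:
        # count outcomes per group, then predict each group's most common outcome.
        counts = defaultdict(Counter)
        for group, outcome in data:
            counts[group][outcome] += 1
        return {group: c.most_common(1)[0][0] for group, c in counts.items()}

    print(learn_rule(training))  # {'A': 'approve', 'B': 'deny'}: the data's bias becomes the rule

The point is not that real learners are this simple, but that a far more sophisticated one trained on similarly skewed data can encode the same bias, only less visibly.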

That puts some people at a particular disadvantage. For example, people who believe that online gambling is “fair” are likely to be easy victims, and so are those who know it is not fair but are nevertheless addicted (see Francis X. Clines, “Threatened with Ruin at the Virtual Casino,” The New York Times, Feb. 5, 2017). People who believe that elections based on Internet voting and proprietary, unauditable voting machines are inherently “fair” can be easily misled. People who continue to believe that Russians had no influence on the November 2016 election in the U.S. or on the April 2017 first-round presidential election in France are oblivious to real evidence in both cases. Indeed, the Netherlands recently abandoned electronic voting systems and returned to paper ballots—further evidence that numerous governments believe there is ongoing Russian interference.

Risks of Believing in Human Truthfulness and Integrity

While trust in other humans is a positive value, it also can pose grave risks. Human creativity can have its downsides; ransomware, cyberfraud, cybercrime, and even spam all seem to be not only increasing, but also becoming much more sophisticated. Social engineering is still a simple and effective way to break into otherwise secure facilities or computer systems. It takes advantage of normal human decency, helpfulness, politeness, and altruism, yet a knee-jerk attempt to rein in social engineering could involve eliminating these very desirable social attributes (which might also erode civility and decency in our society).

How Does This All Fit Together?

It should be clear by now that individual solutions to local problems are not likely to be sufficient. We have to consider trustworthiness in a total-system context that includes hardware, software, networking, people, environmental concerns, and more. On September 22, 1988, Bob Morris (then chief scientist of the National Computer Security Center at NSA) said in a session of the National Academies’ Computer Science and Telecommunications [now Technology] Board on problems relating to security, “To a first approximation, every computer in the world is connected with every other computer.” Almost 30 years later, that is more true than ever. Similarly, all of the security and risk issues involving computers and people may be intimately intertwined.

Science is never perfect or immutable—it is often a work in progress. Hence, scientists almost never know they have a final answer. Fortunately, scientific methods have evolved over time, and scientists generally welcome challenges and disagreements that can ultimately be resolved through better theories, experimental evidence, and rational debate. Occasionally, we even expose fake science and untrustworthy scientists via peer pressure. Where science has strong, credible evidence, it deserves to be respected—because in the final analysis reality should be able to trump fantasies (although in practice it does not always do so).

Even though truth is in flux; even though it is relative, not absolute; and even though it often comes with many caveats, truth matters. Paraphrasing Albert Einstein, “Everything should be stated as simply as possible, but not simpler.” Oversimplifications, lack of foresight, and excessive subjectivity are often sources of serious misunderstandings, and can result in major catastrophes. People who believe everything they read on Facebook, Google, Amazon, Twitter, and other Internet sites are clearly delusional.

Conclusion

People who are less aware of technology-related risks tend to underestimate a computer’s potential to make mistakes. Neither computer behavior nor human behavior is always perfect; blindly believing they are engenders significant risks. We shouldn’t believe everything we read on the Internet in the absence of credible corroboration, just as we should not believe people who pervasively dishonor truthfulness.

Unfortunately, the trends for the future seem relatively bleak. The absence of computer system trustworthiness raises troubling implications. As a recent article by Bruce G. Blair (“Why Our Nuclear Weapons Can Be Hacked,” The New York Times, Mar. 14, 2017) suggests, “Loose security invites a cyberattack with possibly horrific consequences.” Semi- and fully autonomous systems, the seemingly imminent Internet of Things, and artificial intelligence are providing further examples in which increasing complexity leads to obscure and unexplainable system behavior. It seems the concept of determining trustworthiness of systems and people through objective evidence is being supplanted by blind faith—without any strong cases being made for safety, security, or even the kind of assurance that is required in other regulated critical industries such as aviation.

However, the ultimate danger may involve “alternative facts,” which extensively undermine institutional trustworthiness. In the face of vast institutionally sanctioned disinformation, assessing truth and trustworthiness may be more important than ever before.
___________________________________________________

Peter G. Neumann (neumann@csl.sri.com) moderates the ACM Risks Forum and is Senior Principal Scientist in SRI International’s Computer Science Lab. A similar version of this article appeared in CACM Inside Risks in June 2017.