Cyberwarfare Technology: Is Too Much Secrecy Bad?

8 April 2008

Playing devil's advocate with secret technology.


Success may breed more success, but it also seems to breed problems. In the aftermath of Israel's air strike on Syria's Dayr az-Zawr nuclear facility, Israeli officials implied that the cyberattack capabilities used to shut down Syria's nationwide air defense were actually too secret.

Pinchas Buchris, director general of Israel's defense ministry, sounded almost plaintive when he said that "now I have to find a way to explain these [cyberwarfare] capabilities to other people so that they understand."

So, to paraphrase former US defense secretary Robert McNamara, how much secrecy is enough?

"It is incredibly difficult to keep truly revolutionary technology under wraps"


To begin with, the costs of too much openness are fairly obvious. If your adversaries know something about your technology, they can devise ways to neutralise it, or at least reduce its effectiveness. Furthermore, if your adversaries know enough about your technology, they can reverse-engineer it for their own use.

Most decision-makers, and indeed most people generally, seem to be innately sensitive to the risks of disclosing too much, or too widely. Much of this fear probably stems from the profound consequences of compromised cryptography, such as the Allied breaking of Enigma in World War II.

Strategic or battlefield intelligence, however, is fundamentally different from technological intelligence – knowing how your enemy's weapons work, historically speaking, is less important than knowing where and when your enemy intends to use them.


Perhaps more critically, people are cognitively biased toward wishful thinking, such as the notion that your advantages are more durable than they really are. In fact, specific technological applications are wasting assets for a number of reasons.

To begin with, even the most secret applied R&D evolved at some point from open 'pure' R&D, which implies that there is no practical way to keep adversaries from developing your ostensibly secret technology on their own. In the case of the naval dreadnought, for example, two of the three key technologies (high-explosive large-caliber munitions and steam turbine propulsion) derived from commercially oriented innovations (dynamite and electrical generators, respectively).

Even if your adversaries don't come up with the original innovation, their intelligence collection efforts, including, but not limited to, espionage, will probably pick up enough for their own scientists to fill in the gaps. The history of nuclear weapons development, from Soviet espionage on the Manhattan Project to the Chinese replication of US miniaturised warhead technology in the 1990s, demonstrates how difficult it is to keep truly revolutionary technology under wraps.

Finally, once you start using the technology, the use itself leaves traces that allow a clever adversary to imitate, or defeat, some of it. For example, Syria and Hezbollah deployed their own UAVs against Israel during the 2006 Lebanon war. The likelihood of imitation grows with cumulative use: accidents happen, the enemy stumbles upon a weakness even your designers did not foresee, and so forth.

"Restricting the dissemination of technology means that you don't get as much use out of it as you should."

In other words, decision-makers overestimate the costs of too much openness because they usually overestimate the value of what they have and underestimate the value of what others might create. This tendency stems from a number of well-known cognitive biases (e.g. the availability bias and the fundamental attribution error), but it is still rather ironic given that most military disasters stem from misjudgments of available information and misallocation of resources rather than from an adversary's unilateral technological advantage.

After all, the British invented tanks in World War I, and the French had more tanks than Germany in 1940, but the Wehrmacht used what they had far better.


Keeping technology secret requires restricting relevant knowledge to small numbers of people and otherwise making diligent efforts to disguise the nature of your asset. Unfortunately, this creates opportunity costs that tend to be undervalued because they never become tangible enough for decision-makers to appreciate them fully.

Operationally, restricting the dissemination of technology means that you don't get as much use out of it as you should. To date, Israel has kept its G550 network attack aircraft, first used against Hezbollah during the 2006 war in southern Lebanon, segregated from the rest of the IAF. Tellingly, however, military officials believe that the effectiveness of these aircraft will improve significantly as they are increasingly distributed across the IDF's own network.

Technologically, over-secrecy retards development. One example is academic-military cooperation. "I know that in the US, universities are involved in these kinds of issues," Buchris says regarding cyberattack technology. "But in Israel, it's totally different" because the highly fertile Israeli research universities are cut out of the loop.