A German university student has demonstrated an effective way to get code of his choosing to run on the computers of software developers, at least some of whom work for US governmental and military organizations.

The eye-opening (if ethically questionable) research was conducted by University of Hamburg student Nikolai Philipp Tschacher as part of his bachelor thesis. Using a variation of a decade-old attack known as typosquatting, he uploaded packages of his own to three popular developer communities, giving them names similar to those of widely used packages already submitted by other users. Over a span of several months, his imposter code was executed more than 45,000 times on more than 17,000 separate domains, and more than half the time his code was given all-powerful administrative rights. Two of the affected domains ended in .mil, an indication that people inside the US military had run his script.

"There were also 23 .gov domains from governmental institutions of the United States," Tschacher wrote in his thesis. "This number is highly alarming, because taking over hosts in US research laboratories and governmental institutions may have potentially disastrous consequences for them."

Attackers' target of choice


Attackers who conduct espionage campaigns against government and corporate groups frequently regard developers as their target of choice. That's because developers have high-level access to sensitive networks and also have control over the code that other people inside a targeted organization execute on their computers. Case in point: a string of attacks in 2013 that targeted software engineers inside Facebook, Microsoft, and Apple by first infecting an iPhone developer website the employees were known to visit.

In the months following the attacks, Facebook and many other large organizations began restricting or outright blocking Java, Flash, and other browser plugins known to be vulnerable to drive-by download attacks. Tschacher's research suggests that despite those measures, it may still be disturbingly easy for attackers to infect developers.

The 25-year-old student titled his thesis "Typosquatting in Programming Language Package Managers." The technique has its roots in so-called typosquatting attacks, in which attackers and phishers registered domains such as gooogle.com, appple.com, or similarly mistyped names that closely resemble trusted and widely visited domains. When end-users accidentally entered the names into their address bars, the typos sent their browsers to malicious imposter sites that masqueraded as legitimate destinations while pushing malware or trying to collect user passwords. Then, in 2011, security researcher Artem Dinaburg introduced an attack he called Bitsquatting. It built off the spirit of typosquatting but instead of relying on end users entering the wrong domain, it capitalized on random single-bit errors made by computers.
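To make the bitsquatting mechanism concrete, here is a minimal sketch (not code from Dinaburg's research or Tschacher's thesis) of how an attacker might enumerate the domains a single-bit memory error could produce. Each character of a target domain is XORed with every one-bit mask, and any result that is still a valid hostname character yields a registrable bitsquat candidate.

```python
import string


def bitflip_variants(domain):
    """Return all domains that differ from `domain` by exactly one flipped bit.

    Only variants whose flipped character is still legal in a hostname
    (lowercase letters, digits, hyphen, dot) are kept, since anything else
    could never resolve as a domain name.
    """
    allowed = set(string.ascii_lowercase + string.digits + "-.")
    variants = set()
    for i, ch in enumerate(domain):
        for bit in range(8):  # try flipping each of the 8 bits in the byte
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped != ch and flipped in allowed:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return sorted(variants)


if __name__ == "__main__":
    # A single bit flip in the first byte of "example.com" can turn
    # 'e' (0x65) into 'a' (0x61), sending traffic to "axample.com".
    for candidate in bitflip_variants("example.com")[:5]:
        print(candidate)
```

Because the flips happen in the requesting machine's memory rather than at the keyboard, an attacker registering such variants passively collects traffic without the user mistyping anything.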

Tschacher's attack worked in a similar fashion. He first identified 214 of the most widely downloaded user-submitted packages on PyPI, RubyGems, and NPM, which are community repositories for developers of the Python, Ruby, and JavaScript programming languages respectively. He then uploaded packages of his own to the sites, giving them names that closely resembled those of the 214 packages. The student's benign script displayed a warning informing developers that they may have inadvertently installed the wrong package. But before it did, the code sent a Web request to a university computer so he could track how many times his code was executed and whether it was given administrative rights.

It's not clear if the experiment broke ethical or even legal boundaries, since it relied on confusion if not outright deceit to trick people into installing something other than what they intended to install. Still, the lesson the experiment imparts is worth heeding.

"I think we're pretty well aware of the fact that if you install a random third-party module that no one has vetted and you don't know there is inherent risk because it could do anything," Azimuth Security senior researcher Dan Rosenberg told Ars. "The novelty here is [that] even if you know and trust a module, if you make a typo you could still be running untrusted code."