Motivated by recent developments in cyberwarfare, we study deterrence in a world where attacks cannot be perfectly attributed to attackers. In the model, each of $n$ attackers may attack the defender. The defender observes a noisy signal that probabilistically attributes the attack. The defender may retaliate against one or more attackers and wants to retaliate against the guilty attacker only. We note an endogenous strategic complementarity among the attackers: if one attacker becomes more aggressive, that attacker becomes more “suspect” and the other attackers become less suspect, which leads the other attackers to become more aggressive as well. Despite this complementarity, there is a unique equilibrium. We identify types of improvements in attribution that strengthen deterrence—namely, improving attack detection independently of any effect on the identifiability of the attacker, reducing false alarms, or replacing misidentification with non-detection. However, we show that other improvements in attribution can backfire, weakening deterrence—these include detecting more attacks where the attacker is difficult to identify or pursuing too much certainty in attribution. Deterrence is improved if the defender can commit to a retaliatory strategy in advance, but the defender should not always commit to retaliate more after every signal.