Fix signature threshold verification
Prior to this commit, metadata signature verification as provided
by `tuf.sig.verify()` and used e.g. in `tuf.client.updater` counted
multiple signatures with identical authorized keyids each
separately towards the threshold. This behavior practically
subverts the signature threshold check.

This commit fixes the issue by counting identical authorized keyids
only once towards the threshold.
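The effect of the fix can be illustrated with a minimal, self-contained sketch (the `meets_threshold` helper is hypothetical, not the actual `tuf.sig` API): deduplicating keyids before comparing against the threshold rejects repeated signatures from a single authorized keyid.

```python
def meets_threshold(good_sig_keyids, threshold):
    """Count each authorized keyid only once towards the threshold."""
    return len(set(good_sig_keyids)) >= threshold

# Two good signatures from the same keyid satisfy a threshold of 2 only
# under the broken pre-fix check (len(good_sigs) >= threshold).
duplicate_sigs = ["deadbeef", "deadbeef"]
assert len(duplicate_sigs) >= 2            # broken pre-fix check passes
assert not meets_threshold(duplicate_sigs, 2)  # fixed check rejects
```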

The commit further clarifies the behavior of the relevant functions
in the `sig` module, i.e. `get_signature_status` and `verify`, in
their respective docstrings, and adds tests for those functions as
well as for the client updater.

---

NOTE: With this commit, signatures with different authorized keyids
still each count separately towards the threshold, even if the
keyids identify the same key. If this behavior is not desired, I
propose the following fix instead, which verifies uniqueness of
keys (and not keyids):

```diff
diff --git a/tuf/sig.py b/tuf/sig.py
index ae9bae15..5392e596 100755
--- a/tuf/sig.py
+++ b/tuf/sig.py
@@ -303,7 +303,14 @@ def verify(signable, role, repository_name='default', threshold=None,
   if threshold is None or threshold <= 0: #pragma: no cover
     raise securesystemslib.exceptions.Error("Invalid threshold: " + repr(threshold))

-  return len(good_sigs) >= threshold
+  # Different keyids might point to the same key
+  # To be safe, check against unique public key values
+  unique_good_sig_keys = set()
+  for keyid in good_sigs:
+    key = tuf.keydb.get_key(keyid, repository_name)
+    unique_good_sig_keys.add(key["keyval"]["public"])
+
+  return len(unique_good_sig_keys) >= threshold

```
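As a runnable illustration of this alternative, the sketch below counts unique public key values rather than unique keyids, so that different keyids for the same key collapse to one. The `verify_by_unique_keys` helper and the plain dict `keydb` are stand-ins for `tuf.sig.verify` and `tuf.keydb`, not the actual API.

```python
def verify_by_unique_keys(good_sig_keyids, keydb, threshold):
    """Check the threshold against unique public key values."""
    unique_good_sig_keys = set()
    for keyid in good_sig_keyids:
        # Stand-in for tuf.keydb.get_key(keyid, repository_name)
        key = keydb[keyid]
        unique_good_sig_keys.add(key["keyval"]["public"])
    return len(unique_good_sig_keys) >= threshold

# Two different keyids that map to the same public key count only once.
keydb = {
    "deadbeef256": {"keyval": {"public": "PUBKEY"}},
    "deadbeef512": {"keyval": {"public": "PUBKEY"}},
}
assert not verify_by_unique_keys(["deadbeef256", "deadbeef512"], keydb, 2)
```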

Signed-off-by: Lukas Puehringer <lukas.puehringer@nyu.edu>
lukpueh committed Jan 9, 2020
1 parent b913d88 commit a0397c7
Showing 3 changed files with 141 additions and 31 deletions.
60 changes: 60 additions & 0 deletions tests/test_sig.py
@@ -31,6 +31,7 @@

import unittest
import logging
import copy

import tuf
import tuf.log
@@ -384,6 +385,65 @@ def test_verify_single_key(self):
tuf.roledb.remove_role('Root')



def test_verify_must_not_count_duplicate_keyids_towards_threshold(self):
# Create and sign dummy metadata twice with same key
signable = {"signed" : "test", "signatures" : []}
signed = securesystemslib.formats.encode_canonical(
signable["signed"]).encode("utf-8")
signable["signatures"].append(

[Comment] trishankatdatadog (Member), Jan 9, 2020:
More robust code to just compute the signature once, and append it twice, no?

[Comment] lukpueh (Author), Jan 10, 2020:
You are right, @trishankatdatadog, and I had it like that at first, but then thought that having two different signatures (we use non-deterministic rsassa-pss here) shows that we not only detect duplicate signatures but also different signatures from the same key. Does that make sense? Should I add a comment?

[Comment] lukpueh (Author), Jan 10, 2020:
Added a comment in 67a3a7a.

securesystemslib.keys.create_signature(KEYS[0], signed))
signable["signatures"].append(
securesystemslib.keys.create_signature(KEYS[0], signed))

# 'get_signature_status' uses keys from keydb for verification
tuf.keydb.add_key(KEYS[0])

# Assert that 'get_signature_status' returns two good signatures ...
status = tuf.sig.get_signature_status(
signable, "root", keyids=[KEYS[0]["keyid"]], threshold=2)
self.assertTrue(len(status["good_sigs"]) == 2)

[Comment] trishankatdatadog (Member), Jan 9, 2020:
Again, I have my reservations about basically correcting dangerous output later on in verify(), even though this technically meets the semantics of the current API, I guess.

[Comment] lukpueh (Author), Jan 10, 2020:
See a0397c7#commitcomment-36742439 for my rationale.


# ... but only one counts towards the threshold
self.assertFalse(
tuf.sig.verify(signable, "root", keyids=[KEYS[0]["keyid"]], threshold=2))

# Clean-up keydb
tuf.keydb.remove_key(KEYS[0]["keyid"])



def test_verify_count_different_keyids_for_same_key_towards_threshold(self):
# Create and sign dummy metadata twice with same key but different keyids
signable = {"signed" : "test", "signatures" : []}
key_sha256 = copy.deepcopy(KEYS[0])
key_sha256["keyid"] = "deadbeef256"

key_sha512 = copy.deepcopy(KEYS[0])
key_sha512["keyid"] = "deadbeef512"

signed = securesystemslib.formats.encode_canonical(
signable["signed"]).encode("utf-8")
signable["signatures"].append(
securesystemslib.keys.create_signature(key_sha256, signed))
signable["signatures"].append(
securesystemslib.keys.create_signature(key_sha512, signed))

# 'get_signature_status' uses keys from keydb for verification
tuf.keydb.add_key(key_sha256)
tuf.keydb.add_key(key_sha512)

# Assert that both keys count towards the threshold although it's the same key
keyids = [key_sha256["keyid"], key_sha512["keyid"]]
self.assertTrue(
tuf.sig.verify(signable, "root", keyids=keyids, threshold=2))

[Comment] trishankatdatadog (Member), Jan 9, 2020:
This is a damned good test. It seems wrong to me that the same key would be counted twice just because the keyids are different (due to different hash algorithms). This suggests to me that we need a way of grouping the same keys together in TUF metadata, even if the keyids differ.

[Comment] lukpueh (Author), Jan 10, 2020:
Thanks! :) I agree with you that we should count unique keys instead of keyids; that's why I added the NOTE to sig.verify, suggested an alternative patch (see the commit message) and created this test (if we use that patch, we can simply switch to assertFalse here).

I committed the minimal fix (i.e. only unique-ify keyids) here because it is enough to close the vulnerability. An attacker who does not have a threshold of keys cannot authorize different keyids for the same key.

How TUF should behave if multiple keyids for the same key are legitimately authorized is something that IMO should be defined in the specification and requires more discussion. Right now the spec does not even allow different keyids for the same key (see tuf-spec.md#L572):

    The KEYID of a key is the hexdigest of the SHA-256 hash of the
    canonical JSON form of the key.

The feature that allows this (i.e. keyid_hash_algorithms) only exists in the reference implementation, and its specification is subject to an ongoing discussion in #848.
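The spec's keyid rule quoted above can be sketched in a few lines. Canonical JSON is approximated here with `json.dumps(sort_keys=True, separators=(',', ':'))`; the reference implementation uses `securesystemslib.formats.encode_canonical` instead, so the exact digests may differ.

```python
import hashlib
import json

def compute_keyid(key):
    """Hexdigest of the SHA-256 hash of the (approximated) canonical
    JSON form of the key, per the spec's KEYID definition."""
    canonical = json.dumps(key, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

key = {"keytype": "ed25519", "keyval": {"public": "abc123"}}
keyid = compute_keyid(key)
assert len(keyid) == 64  # SHA-256 hexdigest length
```

Under this rule a key maps to exactly one keyid, which is why the spec as written does not admit multiple keyids per key.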


# Clean-up keydb
tuf.keydb.remove_key(key_sha256["keyid"])
tuf.keydb.remove_key(key_sha512["keyid"])



def test_verify_unrecognized_sig(self):
signable = {'signed' : 'test', 'signatures' : []}
signed = securesystemslib.formats.encode_canonical(signable['signed']).encode('utf-8')
47 changes: 47 additions & 0 deletions tests/test_updater.py
@@ -425,6 +425,53 @@ def test_1__update_versioninfo(self):



def test_1__refresh_must_not_count_duplicate_keyids_towards_threshold(self):
# Update root threshold on the server repository and sign twice with 1 key
repository = repo_tool.load_repository(self.repository_directory)
repository.root.threshold = 2
repository.root.load_signing_key(self.role_keys['root']['private'])

# The client uses the threshold from the previous root file to verify the
# new root. Thus we need to make two updates so that the threshold used for
# verification becomes 2. I.e. we bump the version, sign twice with the
# same key and write to disk '2.root.json' and '3.root.json'.
for version in [2, 3]:
repository.root.version = version
info = tuf.roledb.get_roleinfo("root")
metadata = repo_lib.generate_root_metadata(
info["version"], info["expires"], False)
signed_metadata = repo_lib.sign_metadata(
metadata, info["keyids"], "root.json", "default")
signed_metadata["signatures"].append(signed_metadata["signatures"][0])
live_root_path = os.path.join(
self.repository_directory, "metadata", "root.json")

# Bypass server side verification in 'write' or 'writeall', which would
# catch the unmet threshold.
# We also skip writing to 'metadata.staged' and copying to 'metadata' and
# instead write directly to 'metadata'
repo_lib.write_metadata_file(signed_metadata, live_root_path, info["version"], True)


# Update from current '1.root.json' to '3.root.json' on client and assert
# raise of 'BadSignatureError' (caused by unmet signature threshold).
try:
self.repository_updater.refresh()

except tuf.exceptions.NoWorkingMirrorError as e:
mirror_errors = list(e.mirror_errors.values())
self.assertTrue(len(mirror_errors) == 1)
self.assertTrue(
isinstance(mirror_errors[0],
securesystemslib.exceptions.BadSignatureError))
self.assertEqual(
str(mirror_errors[0]),
repr("root") + " metadata has bad signature.")

else:
self.fail(
"Expected a NoWorkingMirrorError composed of one BadSignatureError")


def test_1__update_fileinfo(self):
# Tests
65 changes: 34 additions & 31 deletions tuf/sig.py
@@ -71,11 +71,22 @@ def get_signature_status(signable, role=None, repository_name='default',
"""
<Purpose>
Return a dictionary representing the status of the signatures listed in
'signable'. Given an object conformant to SIGNABLE_SCHEMA, a set of public
keys in 'tuf.keydb', a set of roles in 'tuf.roledb', and a role,
the status of these signatures can be determined. This method will iterate
the signatures in 'signable' and enumerate all the keys that are valid,
invalid, unrecognized, or unauthorized.
'signable'. Signatures in the returned dictionary are identified by the
signature keyid and can have a status of either:
* bad -- Invalid signature
* good -- Valid signature from key that is available in 'tuf.keydb', and is
authorized for the passed role as per 'tuf.roledb' (authorization may be
overwritten by passed 'keyids').
* unknown -- Signature from key that is not available in 'tuf.keydb'.

[Comment] trishankatdatadog (Member), Jan 9, 2020:
More like if role is not None, no?

[Comment] lukpueh (Author), Jan 10, 2020:
Fixed the docstring in 67a3a7a.

* unknown signing schemes -- Signature from key with unknown signing
scheme.
* untrusted -- Valid signature from key that is available in 'tuf.keydb',
but is not trusted for the passed role as per 'tuf.roledb' or the passed
'keyids'.
NOTE: The result may contain duplicate keyids or keyids that reference the
same key, if 'signable' lists multiple signatures from the same key.
<Arguments>
signable:
@@ -87,7 +98,7 @@ def get_signature_status(signable, role=None, repository_name='default',
Conformant to tuf.formats.SIGNABLE_SCHEMA.
role:
TUF role (e.g., 'root', 'targets', 'snapshot').
TUF role string (e.g. 'root', 'targets', 'snapshot' or timestamp).
threshold:
Rather than reference the role's threshold as set in tuf.roledb.py, use
@@ -133,22 +144,6 @@ def get_signature_status(signable, role=None, repository_name='default',

# The signature status dictionary returned.
signature_status = {}

# The fields of the signature_status dict, where each field stores keyids. A
# description of each field:
#
# good_sigs = keys confirmed to have produced 'sig' using 'signed', which are
# associated with 'role';
#
# bad_sigs = negation of good_sigs;
#
# unknown_sigs = keys not found in the 'keydb' database;
#
# untrusted_sigs = keys that are not in the list of keyids associated with
# 'role';
#
# unknown_signing_scheme = signing schemes specified in keys that are
# unsupported;
good_sigs = []
bad_sigs = []
unknown_sigs = []
@@ -240,18 +235,26 @@ def verify(signable, role, repository_name='default', threshold=None,
keyids=None):
"""
<Purpose>
Verify whether the authorized signatures of 'signable' meet the minimum
required by 'role'. Authorized signatures are those with valid keys
associated with 'role'. 'signable' must conform to SIGNABLE_SCHEMA
and 'role' must not equal 'None' or be less than zero.
Verify that 'signable' has a valid threshold of authorized signatures
identified by unique keyids. The threshold and whether a keyid is
authorized is determined by querying the 'threshold' and 'keyids' info for
the passed 'role' in 'tuf.roledb'. Both values can be overwritten by
passing the 'threshold' or 'keyids' arguments.
NOTE:
- Signatures with identical authorized keyids only count towards the
threshold once.
- Signatures with different authorized keyids each count towards the
threshold, even if the keyids identify the same key.
<Arguments>
signable:
A dictionary containing a list of signatures and a 'signed' identifier.
A dictionary containing a list of signatures and a 'signed' identifier
that conforms to SIGNABLE_SCHEMA, e.g.:
signable = {'signed':, 'signatures': [{'keyid':, 'method':, 'sig':}]}
role:
TUF role (e.g., 'root', 'targets', 'snapshot').
TUF role string (e.g. 'root', 'targets', 'snapshot' or timestamp).

[Comment] SantiagoTorres (Member), Jan 10, 2020:
should be 'timestamp' for consistency, I think

threshold:
Rather than reference the role's threshold as set in tuf.roledb.py, use
@@ -278,8 +281,8 @@ def verify(signable, role, repository_name='default', threshold=None,
get_signature_status() will be caught here and re-raised.
<Returns>
Boolean. True if the number of good signatures >= the role's threshold,
False otherwise.
Boolean. True if the number of good unique (by keyid) signatures >= the
role's threshold, False otherwise.
"""

tuf.formats.SIGNABLE_SCHEMA.check_match(signable)
@@ -303,7 +306,7 @@ def verify(signable, role, repository_name='default', threshold=None,
if threshold is None or threshold <= 0: #pragma: no cover
raise securesystemslib.exceptions.Error("Invalid threshold: " + repr(threshold))

return len(good_sigs) >= threshold
return len(set(good_sigs)) >= threshold

[Comment] trishankatdatadog (Member), Jan 9, 2020:
Is there a good reason why we unique-ify good_sigs here instead of at the source (get_signature_status())?

[Comment] lukpueh (Author), Jan 10, 2020:
I did not want to only unique-ify good_sigs in get_signature_status(), because it strips legitimate information. That is, if the signable has multiple good sigs from the same key, IMO the function should report so (also see the NOTE in get_signature_status()'s docstring).

If you think it's better to do this at the source, I suggest updating SIGNATURESTATUS_SCHEMA to allow returning both a list of all good signatures and a list of unique good signatures. The change would be slightly more invasive, and that's why I didn't do it right away, but I'll reconsider. :)



