
Commit 7700b3b

[gyb] Use lambda function to work around PEP 3114
[PEP 3114](https://www.python.org/dev/peps/pep-3114/) renamed `iterator.next()` (Python 2) to `iterator.__next__()` (Python 3). The recommended way to write code that works on both versions is to call the global `next` function instead. To do so here, this patch wraps the call in a lambda so that `tokenize.generate_tokens` still receives a callable, as it did before. The result is functionally equivalent to the old code, with the added benefit of running on both Python 2 and 3.
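As an illustration (not part of the commit; the sample input is hypothetical), a minimal sketch of the version-specific spellings next to the portable one:

```python
import tokenize

sourceLines = ['x = 1\n', 'y = 2\n']  # hypothetical sample input

# Python 2 spelling, renamed by PEP 3114; AttributeError on Python 3:
#     readline = iter(sourceLines).next
# Python 3 spelling, absent on Python 2:
#     readline = iter(sourceLines).__next__

# Portable: the global next() dispatches to next()/__next__().
# The default argument binds one iterator when the lambda is created,
# so every call advances the same iterator instead of restarting.
readline = lambda i=iter(sourceLines): next(i)

# tokenize treats StopIteration from the readline callable as EOF.
for token in tokenize.generate_tokens(readline):
    print(token)
```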
1 parent feace85 · commit 7700b3b

1 file changed (+2, -2 lines)


utils/gyb.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -304,7 +304,7 @@ def splitGybLines(sourceLines):
     dedents = 0
     try:
         for tokenKind, tokenText, tokenStart, (tokenEndLine, tokenEndCol), lineText \
-                in tokenize.generate_tokens(sourceLines.__iter__().next):
+                in tokenize.generate_tokens(lambda i = iter(sourceLines): next(i)):

             if tokenKind in (tokenize.COMMENT, tokenize.ENDMARKER):
                 continue
@@ -347,7 +347,7 @@ def codeStartsWithDedentKeyword(sourceLines):
     """
     tokenText = None
     for tokenKind, tokenText, _, _, _ \
-            in tokenize.generate_tokens(sourceLines.__iter__().next):
+            in tokenize.generate_tokens(lambda i = iter(sourceLines): next(i)):

         if tokenKind != tokenize.COMMENT and tokenText.strip() != '':
             break
```
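A side note on the idiom (commentary, not from the commit; the sample data is hypothetical): the default argument is what makes the lambda usable as a readline callable, because it captures a single iterator at definition time:

```python
lines = ['a = 1\n', 'b = 2\n']  # hypothetical sample input

# Without the default argument, each call builds a fresh iterator
# and returns the first line forever.
broken = lambda: next(iter(lines))

# With it, one iterator is bound when the lambda is defined and each
# call advances it, which is what a readline callable must do.
working = lambda i=iter(lines): next(i)

assert broken() == 'a = 1\n' and broken() == 'a = 1\n'
assert working() == 'a = 1\n' and working() == 'b = 2\n'
```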
