ext: Update ply to version 3.11.
We had been using version 3.2 from 2009, which does not have support
for t_eof().

Change-Id: Id5610a272fe2cecd586991f4c59f3ec77184164e
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5/+/56342
Reviewed-by: Jason Lowe-Power <power.jg@gmail.com>
Reviewed-by: Bobby Bruce <bbruce@ucdavis.edu>
Maintainer: Jason Lowe-Power <power.jg@gmail.com>
Tested-by: kokoro <noreply+kokoro@google.com>
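The commit message cites t_eof() as the motivating feature. As a minimal sketch (assuming the bundled `ply` package, or `pip install ply`, is on the path): a t_eof(t) rule is called when the lexer exhausts its input; returning None ends lexing, while feeding more input and returning a token continues it.

```python
import ply.lex as lex

tokens = ('NUMBER',)
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_eof(t):
    # Available in PLY >= 3.3. A real application could call
    # t.lexer.input(more_data) here and return t.lexer.token();
    # returning None signals true end-of-file.
    return None

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('1 2 3')
result = [tok.value for tok in lexer]
print(result)
```

Hypothetical names aside (the token set here is illustrative), this is the hook the old 3.2 sources lacked.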
@@ -1,13 +1,12 @@
-March 24, 2009
+February 15, 2018

-             Announcing :  PLY-3.2 (Python Lex-Yacc)
+             Announcing :  PLY-3.11 (Python Lex-Yacc)

                        http://www.dabeaz.com/ply

-I'm pleased to announce a significant new update to PLY---a 100% Python
-implementation of the common parsing tools lex and yacc. PLY-3.2 adds
-compatibility for Python 2.6 and 3.0, provides some new customization
-options, and cleans up a lot of internal implementation details.
+I'm pleased to announce PLY-3.11--a pure Python implementation of the
+common parsing tools lex and yacc. PLY-3.11 is a minor bug fix
+release. It supports both Python 2 and Python 3.

 If you are new to PLY, here are a few highlights:
ext/ply/CHANGES (+338 lines)
@@ -1,3 +1,341 @@
Version 3.11
---------------------
02/15/18  beazley
          Fixed some minor bugs related to re flags and token order.
          Github pull requests #151 and #153.

02/15/18  beazley
          Added a set_lexpos() method to grammar symbols. Github issue #148.

04/13/17  beazley
          Mostly minor bug fixes and small code cleanups.

Version 3.10
---------------------
01/31/17: beazley
          Changed grammar signature computation to not involve hashing
          functions. Parts are just combined into a big string.

10/07/16: beazley
          Fixed Issue #101: Incorrect shift-reduce conflict resolution with
          precedence specifier.

          PLY was incorrectly resolving shift-reduce conflicts in certain
          cases. For example, in the example/calc/calc.py example, you
          could trigger it doing this:

              calc > -3 - 4
              1                        (correct answer should be -7)
              calc >

          Issue and suggested patch contributed by https://github.com/RomaVis

Version 3.9
---------------------
08/30/16: beazley
          Exposed the parser state number as the parser.state attribute
          in productions and error functions. For example:

              def p_somerule(p):
                  '''
                  rule : A B C
                  '''
                  print('State:', p.parser.state)

          May address issue #65 (publish current state in error callback).

08/30/16: beazley
          Fixed Issue #88. Python3 compatibility with ply/cpp.

08/30/16: beazley
          Fixed Issue #93. Ply can crash if SyntaxError is raised inside
          a production. Not actually sure if the original implementation
          worked as documented at all. Yacc has been modified to follow
          the spec as outlined in the CHANGES noted for 11/27/07 below.

08/30/16: beazley
          Fixed Issue #97. Failure with code validation when the original
          source files aren't present. Validation step now ignores
          the missing file.

08/30/16: beazley
          Minor fixes to version numbers.

Version 3.8
---------------------
10/02/15: beazley
          Fixed issues related to Python 3.5. Patch contributed by Barry Warsaw.

Version 3.7
---------------------
08/25/15: beazley
          Fixed problems when reading table files from pickled data.

05/07/15: beazley
          Fixed regression in handling of table modules if specified as module
          objects. See https://github.com/dabeaz/ply/issues/63

Version 3.6
---------------------
04/25/15: beazley
          If PLY is unable to create the 'parser.out' or 'parsetab.py' files due
          to permission issues, it now just issues a warning message and
          continues to operate. This could happen if a module using PLY
          is installed in a funny way where tables have to be regenerated, but
          for whatever reason, the user doesn't have write permission on
          the directory where PLY wants to put them.

04/24/15: beazley
          Fixed some issues related to use of packages and table file
          modules. Just to emphasize, PLY now generates its special
          files such as 'parsetab.py' and 'lextab.py' in the *SAME*
          directory as the source file that uses lex() and yacc().

          If for some reason, you want to change the name of the table
          module, use the tabmodule and lextab options:

              lexer  = lex.lex(lextab='spamlextab')
              parser = yacc.yacc(tabmodule='spamparsetab')

          If you specify a simple name as shown, the module will still be
          created in the same directory as the file invoking lex() or yacc().
          If you want the table files to be placed into a different package,
          then give a fully qualified package name. For example:

              lexer  = lex.lex(lextab='pkgname.files.lextab')
              parser = yacc.yacc(tabmodule='pkgname.files.parsetab')

          For this to work, 'pkgname.files' must already exist as a valid
          Python package (i.e., the directories must already exist and be
          set up with the proper __init__.py files, etc.).

Version 3.5
---------------------
04/21/15: beazley
          Added support for defaulted_states in the parser. A
          defaulted_state is a state where the only legal action is a
          reduction of a single grammar rule across all valid input
          tokens. For such states, the rule is reduced and the
          reading of the next lookahead token is delayed until it is
          actually needed at a later point in time.

          This delay in consuming the next lookahead token is a
          potentially important feature in advanced parsing
          applications that require tight interaction between the
          lexer and the parser. For example, a grammar rule can
          modify the lexer state upon reduction and have such changes
          take effect before the next input token is read.

          *** POTENTIAL INCOMPATIBILITY ***
          One potential danger of defaulted_states is that syntax
          errors might be deferred to a later point of processing
          than where they were detected in past versions of PLY.
          Thus, it's possible that your error handling could change
          slightly on the same inputs. defaulted_states do not change
          the overall parsing of the input (i.e., the same grammar is
          accepted).

          If for some reason, you need to disable defaulted states,
          you can do this:

              parser = yacc.yacc()
              parser.defaulted_states = {}

04/21/15: beazley
          Fixed debug logging in the parser. It wasn't properly reporting goto
          states on grammar rule reductions.

04/20/15: beazley
          Added the ability to define actions for character literals
          (Issue #32). For example:

              literals = [ '{', '}' ]

              def t_lbrace(t):
                  r'\{'
                  # Some action
                  t.type = '{'
                  return t

              def t_rbrace(t):
                  r'\}'
                  # Some action
                  t.type = '}'
                  return t

04/19/15: beazley
          Import of the 'parsetab.py' file is now constrained to only consider the
          directory specified by the outputdir argument to yacc(). If not supplied,
          the import will only consider the directory in which the grammar is defined.
          This should greatly reduce problems with the wrong parsetab.py file being
          imported by mistake. For example, if it's found somewhere else on the path
          by accident.

          *** POTENTIAL INCOMPATIBILITY *** It's possible that this might break some
          packaging/deployment setup if PLY was instructed to place its parsetab.py
          in a different location. You'll have to specify a proper outputdir= argument
          to yacc() to fix this if needed.

04/19/15: beazley
          Changed default output directory to be the same as that in which the
          yacc grammar is defined. If your grammar is in a file 'calc.py',
          then the parsetab.py and parser.out files should be generated in the
          same directory as that file. The destination directory can be changed
          using the outputdir= argument to yacc().

04/19/15: beazley
          Changed the parsetab.py file signature slightly so that the parsetab won't
          regenerate if created on a different major version of Python (i.e., a
          parsetab created on Python 2 will work with Python 3).

04/16/15: beazley
          Fixed Issue #44: call_errorfunc() should return the result of errorfunc().

04/16/15: beazley
          Support for versions of Python <2.7 is officially dropped. PLY may work, but
          the unit tests require Python 2.7 or newer.

04/16/15: beazley
          Fixed bug related to calling yacc(start=...). PLY wasn't regenerating the
          table file correctly for this case.

04/16/15: beazley
          Added skipped tests for PyPy and Java. Related to use of Python's -O option.

05/29/13: beazley
          Added filter to make unit tests pass under 'python -3'.
          Reported by Neil Muller.

05/29/13: beazley
          Fixed CPP_INTEGER regex in ply/cpp.py (Issue 21).
          Reported by @vbraun.

05/29/13: beazley
          Fixed yacc validation bugs when from __future__ import unicode_literals
          is being used. Reported by Kenn Knowles.

05/29/13: beazley
          Added support for Travis-CI. Contributed by Kenn Knowles.

05/29/13: beazley
          Added a .gitignore file. Suggested by Kenn Knowles.

05/29/13: beazley
          Fixed validation problems for source files that include a
          different source code encoding specifier. Fix relies on
          the inspect module. Should work on Python 2.6 and newer.
          Not sure about older versions of Python.
          Contributed by Michael Droettboom.

05/21/13: beazley
          Fixed unit tests for yacc to eliminate random failures due to dict hash
          value randomization in Python 3.3.
          Reported by Arfrever.

10/15/12: beazley
          Fixed comment whitespace processing bugs in ply/cpp.py.
          Reported by Alexei Pososin.

10/15/12: beazley
          Fixed token names in ply/ctokens.py to match rule names.
          Reported by Alexei Pososin.

04/26/12: beazley
          Changes to functions available in panic mode error recovery. In previous
          versions of PLY, the following global functions were available for use
          in the p_error() rule:

              yacc.errok()       # Reset error state
              yacc.token()       # Get the next token
              yacc.restart()     # Reset the parsing stack

          The use of global variables was problematic for code involving multiple parsers
          and frankly was a poor design overall. These functions have been moved to methods
          of the parser instance created by the yacc() function. You should write code like
          this:

              def p_error(p):
                  ...
                  parser.errok()

              parser = yacc.yacc()

          *** POTENTIAL INCOMPATIBILITY *** The original global functions now issue a
          DeprecationWarning.

04/19/12: beazley
          Fixed some problems with line and position tracking and the use of error
          symbols. If you have a grammar rule involving an error rule like this:

              def p_assignment_bad(p):
                  '''assignment : location EQUALS error SEMI'''
                  ...

          You can now do line and position tracking on the error token. For example:

              def p_assignment_bad(p):
                  '''assignment : location EQUALS error SEMI'''
                  start_line = p.lineno(3)
                  start_pos  = p.lexpos(3)

          If the tracking=True option is supplied to parse(), you can additionally get
          spans:

              def p_assignment_bad(p):
                  '''assignment : location EQUALS error SEMI'''
                  start_line, end_line = p.linespan(3)
                  start_pos, end_pos   = p.lexspan(3)

          Note that error handling is still a hairy thing in PLY. This won't work
          unless your lexer is providing accurate information. Please report bugs.
          Suggested by a bug reported by Davis Herring.

04/18/12: beazley
          Change to doc string handling in lex module. Regex patterns are now first
          pulled from a function's .regex attribute. If that doesn't exist, then
          .doc is checked as a fallback. The @TOKEN decorator now sets the .regex
          attribute of a function instead of its doc string.
          Change suggested by Kristoffer Ellersgaard Koch.

04/18/12: beazley
          Fixed issue #1: Fixed _tabversion. It should use __tabversion__ instead
          of __version__.
          Reported by Daniele Tricoli.

04/18/12: beazley
          Fixed issue #8: Literals empty list causes IndexError.
          Reported by Walter Nissen.

04/18/12: beazley
          Fixed issue #12: Typo in code snippet in documentation.
          Reported by florianschanda.

04/18/12: beazley
          Fixed issue #10: Correctly escape t_XOREQUAL pattern.
          Reported by Andy Kittner.

Version 3.4
---------------------
02/17/11: beazley
          Minor patch to make cpp.py compatible with Python 3. Note: This
          is an experimental file not currently used by the rest of PLY.

02/17/11: beazley
          Fixed setup.py trove classifiers to properly list PLY as
          Python 3 compatible.

01/02/11: beazley
          Migration of repository to github.

Version 3.3
-----------------------------
08/25/09: beazley
          Fixed issue 15 related to the set_lineno() method in yacc.
          Reported by mdsherry.

08/25/09: beazley
          Fixed a bug related to regular expression compilation flags not being
          properly stored in lextab.py files created by the lexer when running
          in optimize mode. Reported by Bruce Frederiksen.

Version 3.2
-----------------------------
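Two of the changelog items above (the parser.state attribute from 3.9 and defaulted_states from 3.5) can be seen together in a minimal sketch, assuming the `ply` package is importable; the grammar here is illustrative, not from the source:

```python
import ply.lex as lex
import ply.yacc as yacc

tokens = ('A', 'B')
t_A = r'a'
t_B = r'b'
t_ignore = ' '

def t_error(t):
    t.lexer.skip(1)

states_seen = []

def p_rule(p):
    'rule : A B'
    # parser.state (PLY >= 3.9): the LALR state active at this reduction.
    states_seen.append(p.parser.state)

def p_error(p):
    pass

lexer = lex.lex()
parser = yacc.yacc(write_tables=False, debug=False)
# Disable defaulted states (PLY >= 3.5) so syntax errors are reported
# at the same point as in older PLY versions.
parser.defaulted_states = {}
parser.parse('a b', lexer=lexer)
print(len(states_seen))
```

Clearing `defaulted_states` trades a small amount of speed for the pre-3.5 error-reporting behavior, as the POTENTIAL INCOMPATIBILITY note above describes.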
ext/ply/MANIFEST.in (new file, 8 lines)
@@ -0,0 +1,8 @@
+recursive-include example *
+recursive-include doc *
+recursive-include test *
+include ANNOUNCE
+include README.md
+include CHANGES
+include TODO
+global-exclude *.pyc
ext/ply/PKG-INFO (new file, 23 lines)
@@ -0,0 +1,23 @@
Metadata-Version: 1.1
Name: ply
Version: 3.11
Summary: Python Lex & Yacc
Home-page: http://www.dabeaz.com/ply/
Author: David Beazley
Author-email: dave@dabeaz.com
License: BSD
Description-Content-Type: UNKNOWN
Description:
    PLY is yet another implementation of lex and yacc for Python. Some notable
    features include the fact that it's implemented entirely in Python and it
    uses LALR(1) parsing which is efficient and well suited for larger grammars.

    PLY provides most of the standard lex/yacc features including support for empty
    productions, precedence rules, error recovery, and support for ambiguous grammars.

    PLY is extremely easy to use and provides very extensive error checking.
    It is compatible with both Python 2 and Python 3.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 2
@@ -1,6 +1,8 @@
-PLY (Python Lex-Yacc)                   Version 3.2
+# PLY (Python Lex-Yacc) Version 3.11

-Copyright (C) 2001-2009,
+[](https://travis-ci.org/dabeaz/ply)
+
+Copyright (C) 2001-2018
 David M. Beazley (Dabeaz LLC)
 All rights reserved.
@@ -96,7 +98,7 @@ A simple example is found at the end of this document

 Requirements
 ============
-PLY requires the use of Python 2.2 or greater. However, you should
+PLY requires the use of Python 2.6 or greater. However, you should
 use the latest Python release if possible. It should work on just
 about any platform. PLY has been tested with both CPython and Jython.
 It also seems to work with IronPython.
@@ -112,7 +114,11 @@ book "Compilers : Principles, Techniques, and Tools" by Aho, Sethi, and
 Ullman. The topics found in "Lex & Yacc" by Levine, Mason, and Brown
 may also be useful.

-A Google group for PLY can be found at
+The GitHub page for PLY can be found at:
+
+https://github.com/dabeaz/ply
+
+An old and relatively inactive discussion group for PLY is found at:

 http://groups.google.com/group/ply-hack
@@ -130,7 +136,7 @@ and testing a revised LALR(1) implementation for PLY-2.0.

 Special Note for PLY-3.0
 ========================
 PLY-3.0 is the first PLY release to support Python 3. However, backwards
-compatibility with Python 2.2 is still preserved. PLY provides dual
+compatibility with Python 2.6 is still preserved. PLY provides dual
 Python 2/3 compatibility by restricting its implementation to a common
 subset of basic language features. You should not convert PLY using
 2to3--it is not necessary and may in fact break the implementation.
@@ -141,109 +147,109 @@ Example
Here is a simple example showing a PLY implementation of a calculator
with variables. (The hunk replaces the entire listing; the only changes
are the Python 3 modernizations: print statements became print() calls,
`while 1` became `while True`, and a note about raw_input()/input() was
added. The updated listing:)

    # -----------------------------------------------------------------------------
    # calc.py
    #
    # A simple calculator with variables.
    # -----------------------------------------------------------------------------

    tokens = (
        'NAME','NUMBER',
        'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
        'LPAREN','RPAREN',
        )

    # Tokens

    t_PLUS    = r'\+'
    t_MINUS   = r'-'
    t_TIMES   = r'\*'
    t_DIVIDE  = r'/'
    t_EQUALS  = r'='
    t_LPAREN  = r'\('
    t_RPAREN  = r'\)'
    t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # Ignored characters
    t_ignore = " \t"

    def t_newline(t):
        r'\n+'
        t.lexer.lineno += t.value.count("\n")

    def t_error(t):
        print("Illegal character '%s'" % t.value[0])
        t.lexer.skip(1)

    # Build the lexer
    import ply.lex as lex
    lex.lex()

    # Precedence rules for the arithmetic operators
    precedence = (
        ('left','PLUS','MINUS'),
        ('left','TIMES','DIVIDE'),
        ('right','UMINUS'),
        )

    # dictionary of names (for storing variables)
    names = { }

    def p_statement_assign(p):
        'statement : NAME EQUALS expression'
        names[p[1]] = p[3]

    def p_statement_expr(p):
        'statement : expression'
        print(p[1])

    def p_expression_binop(p):
        '''expression : expression PLUS expression
                      | expression MINUS expression
                      | expression TIMES expression
                      | expression DIVIDE expression'''
        if p[2] == '+'  : p[0] = p[1] + p[3]
        elif p[2] == '-': p[0] = p[1] - p[3]
        elif p[2] == '*': p[0] = p[1] * p[3]
        elif p[2] == '/': p[0] = p[1] / p[3]

    def p_expression_uminus(p):
        'expression : MINUS expression %prec UMINUS'
        p[0] = -p[2]

    def p_expression_group(p):
        'expression : LPAREN expression RPAREN'
        p[0] = p[2]

    def p_expression_number(p):
        'expression : NUMBER'
        p[0] = p[1]

    def p_expression_name(p):
        'expression : NAME'
        try:
            p[0] = names[p[1]]
        except LookupError:
            print("Undefined name '%s'" % p[1])
            p[0] = 0

    def p_error(p):
        print("Syntax error at '%s'" % p.value)

    import ply.yacc as yacc
    yacc.yacc()

    while True:
        try:
            s = raw_input('calc > ')   # use input() on Python 3
        except EOFError:
            break
        yacc.parse(s)

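The CHANGES entry for 3.10 (Issue #101) says this very calculator grammar used to mis-parse `-3 - 4`. A stripped-down, non-interactive sketch of the same grammar (assuming `ply` is importable) makes that fix checkable:

```python
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS', 'MINUS', 'TIMES', 'DIVIDE')
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)

# Same precedence table as the README example, including the
# fictitious UMINUS token used only for %prec.
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),
)

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    p[0] = ops[p[2]](p[1], p[3])

def p_expression_uminus(p):
    'expression : MINUS expression %prec UMINUS'
    p[0] = -p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = p[1]

def p_error(p):
    pass

lexer = lex.lex()
parser = yacc.yacc(write_tables=False, debug=False)
result = parser.parse('-3 - 4', lexer=lexer)
print(result)
```

On PLY 3.10 or later (including the 3.11 bundled by this change) the result is -7; the pre-3.10 shift-reduce bug produced 1.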
Bug Reports and Patches
@@ -252,12 +258,10 @@ My goal with PLY is to simply have a decent lex/yacc implementation
 for Python. As a general rule, I don't spend huge amounts of time
 working on it unless I receive very specific bug reports and/or
 patches to fix problems. I also try to incorporate submitted feature
-requests and enhancements into each new version. To contact me about
-bugs and/or new features, please send email to dave@dabeaz.com.
-
-In addition there is a Google group for discussing PLY related issues at
-
-   http://groups.google.com/group/ply-hack
+requests and enhancements into each new version. Please visit the PLY
+github page at https://github.com/dabeaz/ply to submit issues and pull
+requests. To contact me about bugs and/or new features, please send
+email to dave@dabeaz.com.

 -- Dave
@@ -12,7 +12,7 @@ dave@dabeaz.com<br>
 </b>

 <p>
-<b>PLY Version: 3.0</b>
+<b>PLY Version: 3.11</b>
 <p>

 <!-- INDEX -->
(File diff suppressed because it is too large.)
@@ -2,7 +2,7 @@
 #

 import sys
-sys.path.insert(0,"../..")
+sys.path.insert(0, "../..")

 if sys.version_info[0] >= 3:
     raw_input = input

@@ -17,7 +17,8 @@ import basinterp
 if len(sys.argv) == 2:
     data = open(sys.argv[1]).read()
     prog = basparse.parse(data)
-    if not prog: raise SystemExit
+    if not prog:
+        raise SystemExit
     b = basinterp.BasicInterpreter(prog)
     try:
         b.run()
@@ -39,33 +40,26 @@ while 1:
         line = raw_input("[BASIC] ")
     except EOFError:
         raise SystemExit
-    if not line: continue
+    if not line:
+        continue
     line += "\n"
     prog = basparse.parse(line)
-    if not prog: continue
+    if not prog:
+        continue

     keys = list(prog)
     if keys[0] > 0:
-       b.add_statements(prog)
+        b.add_statements(prog)
     else:
-       stat = prog[keys[0]]
-       if stat[0] == 'RUN':
-           try:
-               b.run()
-           except RuntimeError:
-               pass
-       elif stat[0] == 'LIST':
-           b.list()
-       elif stat[0] == 'BLANK':
-           b.del_line(stat[1])
-       elif stat[0] == 'NEW':
-           b.new()
+        stat = prog[keys[0]]
+        if stat[0] == 'RUN':
+            try:
+                b.run()
+            except RuntimeError:
+                pass
+        elif stat[0] == 'LIST':
+            b.list()
+        elif stat[0] == 'BLANK':
+            b.del_line(stat[1])
+        elif stat[0] == 'NEW':
+            b.new()
@@ -3,72 +3,59 @@
 from ply import *

 keywords = (
-    'LET','READ','DATA','PRINT','GOTO','IF','THEN','FOR','NEXT','TO','STEP',
-    'END','STOP','DEF','GOSUB','DIM','REM','RETURN','RUN','LIST','NEW',
+    'LET', 'READ', 'DATA', 'PRINT', 'GOTO', 'IF', 'THEN', 'FOR', 'NEXT', 'TO', 'STEP',
+    'END', 'STOP', 'DEF', 'GOSUB', 'DIM', 'REM', 'RETURN', 'RUN', 'LIST', 'NEW',
 )

 tokens = keywords + (
-    'EQUALS','PLUS','MINUS','TIMES','DIVIDE','POWER',
-    'LPAREN','RPAREN','LT','LE','GT','GE','NE',
-    'COMMA','SEMI', 'INTEGER','FLOAT', 'STRING',
-    'ID','NEWLINE'
+    'EQUALS', 'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'POWER',
+    'LPAREN', 'RPAREN', 'LT', 'LE', 'GT', 'GE', 'NE',
+    'COMMA', 'SEMI', 'INTEGER', 'FLOAT', 'STRING',
+    'ID', 'NEWLINE'
 )

 t_ignore = ' \t'

+
 def t_REM(t):
     r'REM .*'
     return t

+
 def t_ID(t):
     r'[A-Z][A-Z0-9]*'
     if t.value in keywords:
         t.type = t.value
     return t

 t_EQUALS  = r'='
 t_PLUS    = r'\+'
 t_MINUS   = r'-'
 t_TIMES   = r'\*'
 t_POWER   = r'\^'
 t_DIVIDE  = r'/'
 t_LPAREN  = r'\('
 t_RPAREN  = r'\)'
 t_LT      = r'<'
 t_LE      = r'<='
 t_GT      = r'>'
 t_GE      = r'>='
 t_NE      = r'<>'
 t_COMMA   = r'\,'
 t_SEMI    = r';'
 t_INTEGER = r'\d+'
 t_FLOAT   = r'((\d*\.\d+)(E[\+-]?\d+)?|([1-9]\d*E[\+-]?\d+))'
 t_STRING  = r'\".*?\"'

+
 def t_NEWLINE(t):
     r'\n'
     t.lexer.lineno += 1
     return t

+
 def t_error(t):
     print("Illegal character %s" % t.value[0])
     t.lexer.skip(1)

 lex.lex(debug=0)
@@ -2,16 +2,16 @@
 #

 import sys
-sys.path.insert(0,"../..")
+sys.path.insert(0, "../..")

 if sys.version_info[0] >= 3:
     raw_input = input

 import logging
 logging.basicConfig(
-    level = logging.INFO,
-    filename = "parselog.txt",
-    filemode = "w"
+    level=logging.INFO,
+    filename="parselog.txt",
+    filemode="w"
 )
 log = logging.getLogger()
@@ -24,8 +24,9 @@ import basinterp
 # interactive mode below
 if len(sys.argv) == 2:
     data = open(sys.argv[1]).read()
-    prog = basparse.parse(data,debug=log)
-    if not prog: raise SystemExit
+    prog = basparse.parse(data, debug=log)
+    if not prog:
+        raise SystemExit
     b = basinterp.BasicInterpreter(prog)
     try:
         b.run()
@@ -47,33 +48,26 @@ while 1:
         line = raw_input("[BASIC] ")
     except EOFError:
         raise SystemExit
-    if not line: continue
+    if not line:
+        continue
     line += "\n"
-    prog = basparse.parse(line,debug=log)
-    if not prog: continue
+    prog = basparse.parse(line, debug=log)
+    if not prog:
+        continue

     keys = list(prog)
     if keys[0] > 0:
-       b.add_statements(prog)
+        b.add_statements(prog)
     else:
-       stat = prog[keys[0]]
-       if stat[0] == 'RUN':
-           try:
-               b.run()
-           except RuntimeError:
-               pass
-       elif stat[0] == 'LIST':
-           b.list()
-       elif stat[0] == 'BLANK':
-           b.del_line(stat[1])
-       elif stat[0] == 'NEW':
-           b.new()
+        stat = prog[keys[0]]
+        if stat[0] == 'RUN':
+            try:
+                b.run()
+            except RuntimeError:
+                pass
+        elif stat[0] == 'LIST':
+            b.list()
+        elif stat[0] == 'BLANK':
+            b.del_line(stat[1])
+        elif stat[0] == 'NEW':
+            b.new()
@@ -5,141 +5,167 @@ import sys
 import math
 import random

+
 class BasicInterpreter:

     # Initialize the interpreter. prog is a dictionary
     # containing (line,statement) mappings
-    def __init__(self,prog):
+    def __init__(self, prog):
         self.prog = prog

         self.functions = {           # Built-in function table
-            'SIN' : lambda z: math.sin(self.eval(z)),
-            'COS' : lambda z: math.cos(self.eval(z)),
-            'TAN' : lambda z: math.tan(self.eval(z)),
-            'ATN' : lambda z: math.atan(self.eval(z)),
-            'EXP' : lambda z: math.exp(self.eval(z)),
-            'ABS' : lambda z: abs(self.eval(z)),
-            'LOG' : lambda z: math.log(self.eval(z)),
-            'SQR' : lambda z: math.sqrt(self.eval(z)),
-            'INT' : lambda z: int(self.eval(z)),
-            'RND' : lambda z: random.random()
+            'SIN': lambda z: math.sin(self.eval(z)),
+            'COS': lambda z: math.cos(self.eval(z)),
+            'TAN': lambda z: math.tan(self.eval(z)),
+            'ATN': lambda z: math.atan(self.eval(z)),
+            'EXP': lambda z: math.exp(self.eval(z)),
+            'ABS': lambda z: abs(self.eval(z)),
+            'LOG': lambda z: math.log(self.eval(z)),
+            'SQR': lambda z: math.sqrt(self.eval(z)),
+            'INT': lambda z: int(self.eval(z)),
+            'RND': lambda z: random.random()
         }

     # Collect all data statements
     def collect_data(self):
         self.data = []
         for lineno in self.stat:
             if self.prog[lineno][0] == 'DATA':
                 self.data = self.data + self.prog[lineno][1]
         self.dc = 0                  # Initialize the data counter

     # Check for end statements
     def check_end(self):
         has_end = 0
         for lineno in self.stat:
             if self.prog[lineno][0] == 'END' and not has_end:
                 has_end = lineno
         if not has_end:
             print("NO END INSTRUCTION")
             self.error = 1
             return
         if has_end != lineno:
             print("END IS NOT LAST")
             self.error = 1

     # Check loops
     def check_loops(self):
         for pc in range(len(self.stat)):
             lineno = self.stat[pc]
             if self.prog[lineno][0] == 'FOR':
                 forinst = self.prog[lineno]
                 loopvar = forinst[1]
                 for i in range(pc+1, len(self.stat)):
                     if self.prog[self.stat[i]][0] == 'NEXT':
                         nextvar = self.prog[self.stat[i]][1]
                         if nextvar != loopvar: continue
                         self.loopend[pc] = i
                         break
                 else:
                     print("FOR WITHOUT NEXT AT LINE %s" % self.stat[pc])
                     self.error = 1

     # Evaluate an expression
     def eval(self, expr):
         etype = expr[0]
         if etype == 'NUM': return expr[1]
         elif etype == 'GROUP': return self.eval(expr[1])
         elif etype == 'UNARY':
             if expr[1] == '-': return -self.eval(expr[2])
         elif etype == 'BINOP':
             if expr[1] == '+': return self.eval(expr[2]) + self.eval(expr[3])
             elif expr[1] == '-': return self.eval(expr[2]) - self.eval(expr[3])
             elif expr[1] == '*': return self.eval(expr[2]) * self.eval(expr[3])
             elif expr[1] == '/': return float(self.eval(expr[2])) / self.eval(expr[3])
             elif expr[1] == '^': return abs(self.eval(expr[2])) ** self.eval(expr[3])
         elif etype == 'VAR':
             var, dim1, dim2 = expr[1]
             if not dim1 and not dim2:
||||
if var in self.vars:
|
||||
return self.vars[var]
|
||||
else:
|
||||
print("UNDEFINED VARIABLE %s AT LINE %s" % (var, self.stat[self.pc]))
|
||||
raise RuntimeError
|
||||
# May be a list lookup or a function evaluation
|
||||
if dim1 and not dim2:
|
||||
if var in self.functions:
|
||||
# A function
|
||||
return self.functions[var](dim1)
|
||||
for pc in range(len(self.stat)):
|
||||
lineno = self.stat[pc]
|
||||
if self.prog[lineno][0] == 'FOR':
|
||||
forinst = self.prog[lineno]
|
||||
loopvar = forinst[1]
|
||||
for i in range(pc + 1, len(self.stat)):
|
||||
if self.prog[self.stat[i]][0] == 'NEXT':
|
||||
nextvar = self.prog[self.stat[i]][1]
|
||||
if nextvar != loopvar:
|
||||
continue
|
||||
self.loopend[pc] = i
|
||||
break
|
||||
else:
|
||||
# A list evaluation
|
||||
if var in self.lists:
|
||||
dim1val = self.eval(dim1)
|
||||
if dim1val < 1 or dim1val > len(self.lists[var]):
|
||||
print("LIST INDEX OUT OF BOUNDS AT LINE %s" % self.stat[self.pc])
|
||||
raise RuntimeError
|
||||
return self.lists[var][dim1val-1]
|
||||
if dim1 and dim2:
|
||||
if var in self.tables:
|
||||
dim1val = self.eval(dim1)
|
||||
dim2val = self.eval(dim2)
|
||||
if dim1val < 1 or dim1val > len(self.tables[var]) or dim2val < 1 or dim2val > len(self.tables[var][0]):
|
||||
print("TABLE INDEX OUT OUT BOUNDS AT LINE %s" % self.stat[self.pc])
|
||||
raise RuntimeError
|
||||
return self.tables[var][dim1val-1][dim2val-1]
|
||||
print("UNDEFINED VARIABLE %s AT LINE %s" % (var, self.stat[self.pc]))
|
||||
raise RuntimeError
|
||||
print("FOR WITHOUT NEXT AT LINE %s" % self.stat[pc])
|
||||
self.error = 1
|
||||
|
||||
# Evaluate an expression
def eval(self, expr):
etype = expr[0]
if etype == 'NUM':
return expr[1]
elif etype == 'GROUP':
return self.eval(expr[1])
elif etype == 'UNARY':
if expr[1] == '-':
return -self.eval(expr[2])
elif etype == 'BINOP':
if expr[1] == '+':
return self.eval(expr[2]) + self.eval(expr[3])
elif expr[1] == '-':
return self.eval(expr[2]) - self.eval(expr[3])
elif expr[1] == '*':
return self.eval(expr[2]) * self.eval(expr[3])
elif expr[1] == '/':
return float(self.eval(expr[2])) / self.eval(expr[3])
elif expr[1] == '^':
return abs(self.eval(expr[2]))**self.eval(expr[3])
elif etype == 'VAR':
var, dim1, dim2 = expr[1]
if not dim1 and not dim2:
if var in self.vars:
return self.vars[var]
else:
print("UNDEFINED VARIABLE %s AT LINE %s" %
(var, self.stat[self.pc]))
raise RuntimeError
# May be a list lookup or a function evaluation
if dim1 and not dim2:
if var in self.functions:
# A function
return self.functions[var](dim1)
else:
# A list evaluation
if var in self.lists:
dim1val = self.eval(dim1)
if dim1val < 1 or dim1val > len(self.lists[var]):
print("LIST INDEX OUT OF BOUNDS AT LINE %s" %
self.stat[self.pc])
raise RuntimeError
return self.lists[var][dim1val - 1]
if dim1 and dim2:
if var in self.tables:
dim1val = self.eval(dim1)
dim2val = self.eval(dim2)
if dim1val < 1 or dim1val > len(self.tables[var]) or dim2val < 1 or dim2val > len(self.tables[var][0]):
print("TABLE INDEX OUT OUT BOUNDS AT LINE %s" %
self.stat[self.pc])
raise RuntimeError
return self.tables[var][dim1val - 1][dim2val - 1]
print("UNDEFINED VARIABLE %s AT LINE %s" %
(var, self.stat[self.pc]))
raise RuntimeError

# Evaluate a relational expression
def releval(self,expr):
etype = expr[1]
lhs = self.eval(expr[2])
rhs = self.eval(expr[3])
if etype == '<':
if lhs < rhs: return 1
else: return 0
def releval(self, expr):
etype = expr[1]
lhs = self.eval(expr[2])
rhs = self.eval(expr[3])
if etype == '<':
if lhs < rhs:
return 1
else:
return 0

elif etype == '<=':
if lhs <= rhs: return 1
else: return 0
elif etype == '<=':
if lhs <= rhs:
return 1
else:
return 0

elif etype == '>':
if lhs > rhs: return 1
else: return 0
elif etype == '>':
if lhs > rhs:
return 1
else:
return 0

elif etype == '>=':
if lhs >= rhs: return 1
else: return 0
elif etype == '>=':
if lhs >= rhs:
return 1
else:
return 0

elif etype == '=':
if lhs == rhs: return 1
else: return 0
elif etype == '=':
if lhs == rhs:
return 1
else:
return 0

elif etype == '<>':
if lhs != rhs: return 1
else: return 0
elif etype == '<>':
if lhs != rhs:
return 1
else:
return 0

# Assignment
def assign(self,target,value):
def assign(self, target, value):
var, dim1, dim2 = target
if not dim1 and not dim2:
self.vars[var] = self.eval(value)
@@ -147,42 +173,44 @@ class BasicInterpreter:
# List assignment
dim1val = self.eval(dim1)
if not var in self.lists:
self.lists[var] = [0]*10
self.lists[var] = [0] * 10

if dim1val > len(self.lists[var]):
print ("DIMENSION TOO LARGE AT LINE %s" % self.stat[self.pc])
raise RuntimeError
self.lists[var][dim1val-1] = self.eval(value)
print ("DIMENSION TOO LARGE AT LINE %s" % self.stat[self.pc])
raise RuntimeError
self.lists[var][dim1val - 1] = self.eval(value)
elif dim1 and dim2:
dim1val = self.eval(dim1)
dim2val = self.eval(dim2)
if not var in self.tables:
temp = [0]*10
v = []
for i in range(10): v.append(temp[:])
self.tables[var] = v
temp = [0] * 10
v = []
for i in range(10):
v.append(temp[:])
self.tables[var] = v
# Variable already exists
if dim1val > len(self.tables[var]) or dim2val > len(self.tables[var][0]):
print("DIMENSION TOO LARGE AT LINE %s" % self.stat[self.pc])
raise RuntimeError
self.tables[var][dim1val-1][dim2val-1] = self.eval(value)
print("DIMENSION TOO LARGE AT LINE %s" % self.stat[self.pc])
raise RuntimeError
self.tables[var][dim1val - 1][dim2val - 1] = self.eval(value)

# Change the current line number
def goto(self,linenum):
if not linenum in self.prog:
print("UNDEFINED LINE NUMBER %d AT LINE %d" % (linenum, self.stat[self.pc]))
raise RuntimeError
self.pc = self.stat.index(linenum)
def goto(self, linenum):
if not linenum in self.prog:
print("UNDEFINED LINE NUMBER %d AT LINE %d" %
(linenum, self.stat[self.pc]))
raise RuntimeError
self.pc = self.stat.index(linenum)

# Run it
def run(self):
self.vars = { } # All variables
self.lists = { } # List variables
self.tables = { } # Tables
self.loops = [ ] # Currently active loops
self.loopend= { } # Mapping saying where loops end
self.gosub = None # Gosub return point (if any)
self.error = 0 # Indicates program error
self.vars = {} # All variables
self.lists = {} # List variables
self.tables = {} # Tables
self.loops = [] # Currently active loops
self.loopend = {} # Mapping saying where loops end
self.gosub = None # Gosub return point (if any)
self.error = 0 # Indicates program error

self.stat = list(self.prog) # Ordered list of all line numbers
self.stat.sort()
@@ -194,248 +222,275 @@ class BasicInterpreter:
self.check_end()
self.check_loops()

if self.error: raise RuntimeError
if self.error:
raise RuntimeError

while 1:
line = self.stat[self.pc]
line = self.stat[self.pc]
instr = self.prog[line]


op = instr[0]

# END and STOP statements
if op == 'END' or op == 'STOP':
break # We're done
break # We're done

# GOTO statement
elif op == 'GOTO':
newline = instr[1]
self.goto(newline)
continue
newline = instr[1]
self.goto(newline)
continue

# PRINT statement
elif op == 'PRINT':
plist = instr[1]
out = ""
for label,val in plist:
if out:
out += ' '*(15 - (len(out) % 15))
out += label
if val:
if label: out += " "
eval = self.eval(val)
out += str(eval)
sys.stdout.write(out)
end = instr[2]
if not (end == ',' or end == ';'):
sys.stdout.write("\n")
if end == ',': sys.stdout.write(" "*(15-(len(out) % 15)))
if end == ';': sys.stdout.write(" "*(3-(len(out) % 3)))

plist = instr[1]
out = ""
for label, val in plist:
if out:
out += ' ' * (15 - (len(out) % 15))
out += label
if val:
if label:
out += " "
eval = self.eval(val)
out += str(eval)
sys.stdout.write(out)
end = instr[2]
if not (end == ',' or end == ';'):
sys.stdout.write("\n")
if end == ',':
sys.stdout.write(" " * (15 - (len(out) % 15)))
if end == ';':
sys.stdout.write(" " * (3 - (len(out) % 3)))

# LET statement
elif op == 'LET':
target = instr[1]
value = instr[2]
self.assign(target,value)
target = instr[1]
value = instr[2]
self.assign(target, value)

# READ statement
elif op == 'READ':
for target in instr[1]:
if self.dc < len(self.data):
value = ('NUM',self.data[self.dc])
self.assign(target,value)
self.dc += 1
else:
# No more data. Program ends
return
for target in instr[1]:
if self.dc < len(self.data):
value = ('NUM', self.data[self.dc])
self.assign(target, value)
self.dc += 1
else:
# No more data. Program ends
return
elif op == 'IF':
relop = instr[1]
newline = instr[2]
if (self.releval(relop)):
self.goto(newline)
continue
relop = instr[1]
newline = instr[2]
if (self.releval(relop)):
self.goto(newline)
continue

elif op == 'FOR':
loopvar = instr[1]
initval = instr[2]
finval = instr[3]
stepval = instr[4]

# Check to see if this is a new loop
if not self.loops or self.loops[-1][0] != self.pc:
# Looks like a new loop. Make the initial assignment
newvalue = initval
self.assign((loopvar,None,None),initval)
if not stepval: stepval = ('NUM',1)
stepval = self.eval(stepval) # Evaluate step here
self.loops.append((self.pc,stepval))
else:
# It's a repeat of the previous loop
# Update the value of the loop variable according to the step
stepval = ('NUM',self.loops[-1][1])
newvalue = ('BINOP','+',('VAR',(loopvar,None,None)),stepval)
loopvar = instr[1]
initval = instr[2]
finval = instr[3]
stepval = instr[4]

if self.loops[-1][1] < 0: relop = '>='
else: relop = '<='
if not self.releval(('RELOP',relop,newvalue,finval)):
# Loop is done. Jump to the NEXT
self.pc = self.loopend[self.pc]
self.loops.pop()
else:
self.assign((loopvar,None,None),newvalue)
# Check to see if this is a new loop
if not self.loops or self.loops[-1][0] != self.pc:
# Looks like a new loop. Make the initial assignment
newvalue = initval
self.assign((loopvar, None, None), initval)
if not stepval:
stepval = ('NUM', 1)
stepval = self.eval(stepval) # Evaluate step here
self.loops.append((self.pc, stepval))
else:
# It's a repeat of the previous loop
# Update the value of the loop variable according to the
# step
stepval = ('NUM', self.loops[-1][1])
newvalue = (
'BINOP', '+', ('VAR', (loopvar, None, None)), stepval)

if self.loops[-1][1] < 0:
relop = '>='
else:
relop = '<='
if not self.releval(('RELOP', relop, newvalue, finval)):
# Loop is done. Jump to the NEXT
self.pc = self.loopend[self.pc]
self.loops.pop()
else:
self.assign((loopvar, None, None), newvalue)

elif op == 'NEXT':
if not self.loops:
print("NEXT WITHOUT FOR AT LINE %s" % line)
return

nextvar = instr[1]
self.pc = self.loops[-1][0]
loopinst = self.prog[self.stat[self.pc]]
forvar = loopinst[1]
if nextvar != forvar:
print("NEXT DOESN'T MATCH FOR AT LINE %s" % line)
return
continue
if not self.loops:
print("NEXT WITHOUT FOR AT LINE %s" % line)
return

nextvar = instr[1]
self.pc = self.loops[-1][0]
loopinst = self.prog[self.stat[self.pc]]
forvar = loopinst[1]
if nextvar != forvar:
print("NEXT DOESN'T MATCH FOR AT LINE %s" % line)
return
continue
elif op == 'GOSUB':
newline = instr[1]
if self.gosub:
print("ALREADY IN A SUBROUTINE AT LINE %s" % line)
return
self.gosub = self.stat[self.pc]
self.goto(newline)
continue
newline = instr[1]
if self.gosub:
print("ALREADY IN A SUBROUTINE AT LINE %s" % line)
return
self.gosub = self.stat[self.pc]
self.goto(newline)
continue

elif op == 'RETURN':
if not self.gosub:
print("RETURN WITHOUT A GOSUB AT LINE %s" % line)
return
self.goto(self.gosub)
self.gosub = None
if not self.gosub:
print("RETURN WITHOUT A GOSUB AT LINE %s" % line)
return
self.goto(self.gosub)
self.gosub = None

elif op == 'FUNC':
fname = instr[1]
pname = instr[2]
expr = instr[3]
def eval_func(pvalue,name=pname,self=self,expr=expr):
self.assign((pname,None,None),pvalue)
return self.eval(expr)
self.functions[fname] = eval_func
fname = instr[1]
pname = instr[2]
expr = instr[3]

def eval_func(pvalue, name=pname, self=self, expr=expr):
self.assign((pname, None, None), pvalue)
return self.eval(expr)
self.functions[fname] = eval_func

elif op == 'DIM':
for vname,x,y in instr[1]:
if y == 0:
# Single dimension variable
self.lists[vname] = [0]*x
else:
# Double dimension variable
temp = [0]*y
v = []
for i in range(x):
v.append(temp[:])
self.tables[vname] = v
for vname, x, y in instr[1]:
if y == 0:
# Single dimension variable
self.lists[vname] = [0] * x
else:
# Double dimension variable
temp = [0] * y
v = []
for i in range(x):
v.append(temp[:])
self.tables[vname] = v

self.pc += 1
self.pc += 1

# Utility functions for program listing
def expr_str(self,expr):
def expr_str(self, expr):
etype = expr[0]
if etype == 'NUM': return str(expr[1])
elif etype == 'GROUP': return "(%s)" % self.expr_str(expr[1])
if etype == 'NUM':
return str(expr[1])
elif etype == 'GROUP':
return "(%s)" % self.expr_str(expr[1])
elif etype == 'UNARY':
if expr[1] == '-': return "-"+str(expr[2])
if expr[1] == '-':
return "-" + str(expr[2])
elif etype == 'BINOP':
return "%s %s %s" % (self.expr_str(expr[2]),expr[1],self.expr_str(expr[3]))
return "%s %s %s" % (self.expr_str(expr[2]), expr[1], self.expr_str(expr[3]))
elif etype == 'VAR':
return self.var_str(expr[1])
return self.var_str(expr[1])

def relexpr_str(self,expr):
return "%s %s %s" % (self.expr_str(expr[2]),expr[1],self.expr_str(expr[3]))
def relexpr_str(self, expr):
return "%s %s %s" % (self.expr_str(expr[2]), expr[1], self.expr_str(expr[3]))

def var_str(self,var):
varname,dim1,dim2 = var
if not dim1 and not dim2: return varname
if dim1 and not dim2: return "%s(%s)" % (varname, self.expr_str(dim1))
return "%s(%s,%s)" % (varname, self.expr_str(dim1),self.expr_str(dim2))
def var_str(self, var):
varname, dim1, dim2 = var
if not dim1 and not dim2:
return varname
if dim1 and not dim2:
return "%s(%s)" % (varname, self.expr_str(dim1))
return "%s(%s,%s)" % (varname, self.expr_str(dim1), self.expr_str(dim2))

# Create a program listing
def list(self):
stat = list(self.prog) # Ordered list of all line numbers
stat.sort()
for line in stat:
instr = self.prog[line]
op = instr[0]
if op in ['END','STOP','RETURN']:
print("%s %s" % (line, op))
continue
elif op == 'REM':
print("%s %s" % (line, instr[1]))
elif op == 'PRINT':
_out = "%s %s " % (line, op)
first = 1
for p in instr[1]:
if not first: _out += ", "
if p[0] and p[1]: _out += '"%s"%s' % (p[0],self.expr_str(p[1]))
elif p[1]: _out += self.expr_str(p[1])
else: _out += '"%s"' % (p[0],)
first = 0
if instr[2]: _out += instr[2]
print(_out)
elif op == 'LET':
print("%s LET %s = %s" % (line,self.var_str(instr[1]),self.expr_str(instr[2])))
elif op == 'READ':
_out = "%s READ " % line
first = 1
for r in instr[1]:
if not first: _out += ","
_out += self.var_str(r)
first = 0
print(_out)
elif op == 'IF':
print("%s IF %s THEN %d" % (line,self.relexpr_str(instr[1]),instr[2]))
elif op == 'GOTO' or op == 'GOSUB':
print("%s %s %s" % (line, op, instr[1]))
elif op == 'FOR':
_out = "%s FOR %s = %s TO %s" % (line,instr[1],self.expr_str(instr[2]),self.expr_str(instr[3]))
if instr[4]: _out += " STEP %s" % (self.expr_str(instr[4]))
print(_out)
elif op == 'NEXT':
print("%s NEXT %s" % (line, instr[1]))
elif op == 'FUNC':
print("%s DEF %s(%s) = %s" % (line,instr[1],instr[2],self.expr_str(instr[3])))
elif op == 'DIM':
_out = "%s DIM " % line
first = 1
for vname,x,y in instr[1]:
if not first: _out += ","
first = 0
if y == 0:
_out += "%s(%d)" % (vname,x)
else:
_out += "%s(%d,%d)" % (vname,x,y)

print(_out)
elif op == 'DATA':
_out = "%s DATA " % line
first = 1
for v in instr[1]:
if not first: _out += ","
first = 0
_out += v
print(_out)
stat = list(self.prog) # Ordered list of all line numbers
stat.sort()
for line in stat:
instr = self.prog[line]
op = instr[0]
if op in ['END', 'STOP', 'RETURN']:
print("%s %s" % (line, op))
continue
elif op == 'REM':
print("%s %s" % (line, instr[1]))
elif op == 'PRINT':
_out = "%s %s " % (line, op)
first = 1
for p in instr[1]:
if not first:
_out += ", "
if p[0] and p[1]:
_out += '"%s"%s' % (p[0], self.expr_str(p[1]))
elif p[1]:
_out += self.expr_str(p[1])
else:
_out += '"%s"' % (p[0],)
first = 0
if instr[2]:
_out += instr[2]
print(_out)
elif op == 'LET':
print("%s LET %s = %s" %
(line, self.var_str(instr[1]), self.expr_str(instr[2])))
elif op == 'READ':
_out = "%s READ " % line
first = 1
for r in instr[1]:
if not first:
_out += ","
_out += self.var_str(r)
first = 0
print(_out)
elif op == 'IF':
print("%s IF %s THEN %d" %
(line, self.relexpr_str(instr[1]), instr[2]))
elif op == 'GOTO' or op == 'GOSUB':
print("%s %s %s" % (line, op, instr[1]))
elif op == 'FOR':
_out = "%s FOR %s = %s TO %s" % (
line, instr[1], self.expr_str(instr[2]), self.expr_str(instr[3]))
if instr[4]:
_out += " STEP %s" % (self.expr_str(instr[4]))
print(_out)
elif op == 'NEXT':
print("%s NEXT %s" % (line, instr[1]))
elif op == 'FUNC':
print("%s DEF %s(%s) = %s" %
(line, instr[1], instr[2], self.expr_str(instr[3])))
elif op == 'DIM':
_out = "%s DIM " % line
first = 1
for vname, x, y in instr[1]:
if not first:
_out += ","
first = 0
if y == 0:
_out += "%s(%d)" % (vname, x)
else:
_out += "%s(%d,%d)" % (vname, x, y)

print(_out)
elif op == 'DATA':
_out = "%s DATA " % line
first = 1
for v in instr[1]:
if not first:
_out += ","
first = 0
_out += v
print(_out)

# Erase the current program
def new(self):
self.prog = {}

self.prog = {}

# Insert statements
def add_statements(self,prog):
for line,stat in prog.items():
self.prog[line] = stat
def add_statements(self, prog):
for line, stat in prog.items():
self.prog[line] = stat

# Delete a statement
def del_line(self,lineno):
try:
del self.prog[lineno]
except KeyError:
pass

def del_line(self, lineno):
try:
del self.prog[lineno]
except KeyError:
pass

@@ -7,64 +7,72 @@ import basiclex
tokens = basiclex.tokens

precedence = (
('left', 'PLUS','MINUS'),
('left', 'TIMES','DIVIDE'),
('left', 'POWER'),
('right','UMINUS')
('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE'),
('left', 'POWER'),
('right', 'UMINUS')
)

#### A BASIC program is a series of statements. We represent the program as a
#### dictionary of tuples indexed by line number.
# A BASIC program is a series of statements. We represent the program as a
# dictionary of tuples indexed by line number.


def p_program(p):
'''program : program statement
| statement'''

if len(p) == 2 and p[1]:
p[0] = { }
line,stat = p[1]
p[0][line] = stat
elif len(p) ==3:
p[0] = p[1]
if not p[0]: p[0] = { }
if p[2]:
line,stat = p[2]
p[0][line] = stat
p[0] = {}
line, stat = p[1]
p[0][line] = stat
elif len(p) == 3:
p[0] = p[1]
if not p[0]:
p[0] = {}
if p[2]:
line, stat = p[2]
p[0][line] = stat

# This catch-all rule is used for any catastrophic errors. In this case,
# we simply return nothing

#### This catch-all rule is used for any catastrophic errors. In this case,
#### we simply return nothing

def p_program_error(p):
'''program : error'''
p[0] = None
p.parser.error = 1

#### Format of all BASIC statements.
# Format of all BASIC statements.


def p_statement(p):
'''statement : INTEGER command NEWLINE'''
if isinstance(p[2],str):
print("%s %s %s" % (p[2],"AT LINE", p[1]))
if isinstance(p[2], str):
print("%s %s %s" % (p[2], "AT LINE", p[1]))
p[0] = None
p.parser.error = 1
else:
lineno = int(p[1])
p[0] = (lineno,p[2])
p[0] = (lineno, p[2])

# Interactive statements.

#### Interactive statements.

def p_statement_interactive(p):
'''statement : RUN NEWLINE
| LIST NEWLINE
| NEW NEWLINE'''
p[0] = (0, (p[1],0))
p[0] = (0, (p[1], 0))

# Blank line number


#### Blank line number
def p_statement_blank(p):
'''statement : INTEGER NEWLINE'''
p[0] = (0,('BLANK',int(p[1])))
p[0] = (0, ('BLANK', int(p[1])))

# Error handling for malformed statements

#### Error handling for malformed statements

def p_statement_bad(p):
'''statement : INTEGER error NEWLINE'''
@@ -72,191 +80,226 @@ def p_statement_bad(p):
p[0] = None
p.parser.error = 1

#### Blank line
# Blank line


def p_statement_newline(p):
'''statement : NEWLINE'''
p[0] = None

#### LET statement
# LET statement


def p_command_let(p):
'''command : LET variable EQUALS expr'''
p[0] = ('LET',p[2],p[4])
p[0] = ('LET', p[2], p[4])


def p_command_let_bad(p):
'''command : LET variable EQUALS error'''
p[0] = "BAD EXPRESSION IN LET"

#### READ statement
# READ statement


def p_command_read(p):
'''command : READ varlist'''
p[0] = ('READ',p[2])
p[0] = ('READ', p[2])


def p_command_read_bad(p):
'''command : READ error'''
p[0] = "MALFORMED VARIABLE LIST IN READ"

#### DATA statement
# DATA statement


def p_command_data(p):
'''command : DATA numlist'''
p[0] = ('DATA',p[2])
p[0] = ('DATA', p[2])


def p_command_data_bad(p):
'''command : DATA error'''
p[0] = "MALFORMED NUMBER LIST IN DATA"

#### PRINT statement
# PRINT statement


def p_command_print(p):
'''command : PRINT plist optend'''
p[0] = ('PRINT',p[2],p[3])
p[0] = ('PRINT', p[2], p[3])


def p_command_print_bad(p):
'''command : PRINT error'''
p[0] = "MALFORMED PRINT STATEMENT"

#### Optional ending on PRINT. Either a comma (,) or semicolon (;)
# Optional ending on PRINT. Either a comma (,) or semicolon (;)


def p_optend(p):
'''optend : COMMA
| SEMI
|'''
if len(p) == 2:
p[0] = p[1]
if len(p) == 2:
p[0] = p[1]
else:
p[0] = None
p[0] = None

# PRINT statement with no arguments

#### PRINT statement with no arguments

def p_command_print_empty(p):
'''command : PRINT'''
p[0] = ('PRINT',[],None)
p[0] = ('PRINT', [], None)

# GOTO statement

#### GOTO statement

def p_command_goto(p):
'''command : GOTO INTEGER'''
p[0] = ('GOTO',int(p[2]))
p[0] = ('GOTO', int(p[2]))


def p_command_goto_bad(p):
'''command : GOTO error'''
p[0] = "INVALID LINE NUMBER IN GOTO"

#### IF-THEN statement
# IF-THEN statement


def p_command_if(p):
'''command : IF relexpr THEN INTEGER'''
p[0] = ('IF',p[2],int(p[4]))
p[0] = ('IF', p[2], int(p[4]))


def p_command_if_bad(p):
'''command : IF error THEN INTEGER'''
p[0] = "BAD RELATIONAL EXPRESSION"


def p_command_if_bad2(p):
'''command : IF relexpr THEN error'''
p[0] = "INVALID LINE NUMBER IN THEN"

#### FOR statement
# FOR statement


def p_command_for(p):
'''command : FOR ID EQUALS expr TO expr optstep'''
p[0] = ('FOR',p[2],p[4],p[6],p[7])
p[0] = ('FOR', p[2], p[4], p[6], p[7])


def p_command_for_bad_initial(p):
'''command : FOR ID EQUALS error TO expr optstep'''
p[0] = "BAD INITIAL VALUE IN FOR STATEMENT"


def p_command_for_bad_final(p):
'''command : FOR ID EQUALS expr TO error optstep'''
p[0] = "BAD FINAL VALUE IN FOR STATEMENT"


def p_command_for_bad_step(p):
'''command : FOR ID EQUALS expr TO expr STEP error'''
p[0] = "MALFORMED STEP IN FOR STATEMENT"

#### Optional STEP qualifier on FOR statement
# Optional STEP qualifier on FOR statement


def p_optstep(p):
'''optstep : STEP expr
| empty'''
if len(p) == 3:
p[0] = p[2]
p[0] = p[2]
else:
p[0] = None
p[0] = None

# NEXT statement


#### NEXT statement

def p_command_next(p):
'''command : NEXT ID'''

p[0] = ('NEXT',p[2])
p[0] = ('NEXT', p[2])


def p_command_next_bad(p):
'''command : NEXT error'''
p[0] = "MALFORMED NEXT"

#### END statement
# END statement


def p_command_end(p):
'''command : END'''
p[0] = ('END',)

#### REM statement
# REM statement


def p_command_rem(p):
'''command : REM'''
p[0] = ('REM',p[1])
p[0] = ('REM', p[1])

# STOP statement

#### STOP statement

def p_command_stop(p):
'''command : STOP'''
p[0] = ('STOP',)

#### DEF statement
# DEF statement


def p_command_def(p):
'''command : DEF ID LPAREN ID RPAREN EQUALS expr'''
p[0] = ('FUNC',p[2],p[4],p[7])
p[0] = ('FUNC', p[2], p[4], p[7])


def p_command_def_bad_rhs(p):
'''command : DEF ID LPAREN ID RPAREN EQUALS error'''
p[0] = "BAD EXPRESSION IN DEF STATEMENT"


def p_command_def_bad_arg(p):
'''command : DEF ID LPAREN error RPAREN EQUALS expr'''
p[0] = "BAD ARGUMENT IN DEF STATEMENT"

#### GOSUB statement
# GOSUB statement


def p_command_gosub(p):
'''command : GOSUB INTEGER'''
p[0] = ('GOSUB',int(p[2]))
p[0] = ('GOSUB', int(p[2]))


def p_command_gosub_bad(p):
'''command : GOSUB error'''
p[0] = "INVALID LINE NUMBER IN GOSUB"

#### RETURN statement
# RETURN statement


def p_command_return(p):
'''command : RETURN'''
p[0] = ('RETURN',)

#### DIM statement
# DIM statement


def p_command_dim(p):
'''command : DIM dimlist'''
p[0] = ('DIM',p[2])
p[0] = ('DIM', p[2])


def p_command_dim_bad(p):
'''command : DIM error'''
p[0] = "MALFORMED VARIABLE LIST IN DIM"

#### List of variables supplied to DIM statement
# List of variables supplied to DIM statement


def p_dimlist(p):
'''dimlist : dimlist COMMA dimitem
@@ -267,17 +310,20 @@ def p_dimlist(p):
else:
p[0] = [p[1]]

#### DIM items
# DIM items


def p_dimitem_single(p):
'''dimitem : ID LPAREN INTEGER RPAREN'''
p[0] = (p[1],eval(p[3]),0)
p[0] = (p[1], eval(p[3]), 0)


def p_dimitem_double(p):
'''dimitem : ID LPAREN INTEGER COMMA INTEGER RPAREN'''
p[0] = (p[1],eval(p[3]),eval(p[5]))
p[0] = (p[1], eval(p[3]), eval(p[5]))

# Arithmetic expressions

#### Arithmetic expressions

def p_expr_binary(p):
'''expr : expr PLUS expr
@@ -286,26 +332,31 @@ def p_expr_binary(p):
| expr DIVIDE expr
| expr POWER expr'''

p[0] = ('BINOP',p[2],p[1],p[3])
p[0] = ('BINOP', p[2], p[1], p[3])
|
||||
|
||||
|
||||
def p_expr_number(p):
|
||||
'''expr : INTEGER
|
||||
| FLOAT'''
|
||||
p[0] = ('NUM',eval(p[1]))
|
||||
p[0] = ('NUM', eval(p[1]))
|
||||
|
||||
|
||||
def p_expr_variable(p):
|
||||
'''expr : variable'''
|
||||
p[0] = ('VAR',p[1])
|
||||
p[0] = ('VAR', p[1])
|
||||
|
||||
|
||||
def p_expr_group(p):
|
||||
'''expr : LPAREN expr RPAREN'''
|
||||
p[0] = ('GROUP',p[2])
|
||||
p[0] = ('GROUP', p[2])
|
||||
|
||||
|
||||
def p_expr_unary(p):
|
||||
'''expr : MINUS expr %prec UMINUS'''
|
||||
p[0] = ('UNARY','-',p[2])
|
||||
p[0] = ('UNARY', '-', p[2])
|
||||
|
||||
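The expression actions above build a nested-tuple AST: `('BINOP', op, left, right)`, `('NUM', n)`, `('GROUP', e)`, `('UNARY', '-', e)`. A minimal sketch of how an interpreter might walk such tuples — `eval_expr` is a hypothetical helper for illustration, not part of the PLY distribution:

```python
# Minimal walker for the tuple ASTs built by the expression rules above.
# eval_expr is a hypothetical name; PLY's BASIC example has its own interpreter.
def eval_expr(node):
    tag = node[0]
    if tag == 'NUM':
        return node[1]
    if tag == 'GROUP':
        return eval_expr(node[1])
    if tag == 'UNARY':            # ('UNARY', '-', expr)
        return -eval_expr(node[2])
    if tag == 'BINOP':            # ('BINOP', op, left, right)
        op = node[1]
        left, right = eval_expr(node[2]), eval_expr(node[3])
        if op == '+':
            return left + right
        if op == '-':
            return left - right
        if op == '*':
            return left * right
        if op == '/':
            return left / right
        if op == '^':
            return left ** right
    raise ValueError("unknown node %r" % (tag,))

# -(2 + 3) * 4, exactly as the rules above would nest it:
tree = ('BINOP', '*',
        ('UNARY', '-', ('GROUP', ('BINOP', '+', ('NUM', 2), ('NUM', 3)))),
        ('NUM', 4))
```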
# Relational expressions


def p_relexpr(p):
    '''relexpr : expr LT expr

@@ -314,111 +365,110 @@ def p_relexpr(p):
               | expr GE expr
               | expr EQUALS expr
               | expr NE expr'''
    p[0] = ('RELOP', p[2], p[1], p[3])


# Variables


def p_variable(p):
    '''variable : ID
                | ID LPAREN expr RPAREN
                | ID LPAREN expr COMMA expr RPAREN'''
    if len(p) == 2:
        p[0] = (p[1], None, None)
    elif len(p) == 5:
        p[0] = (p[1], p[3], None)
    else:
        p[0] = (p[1], p[3], p[5])


# Builds a list of variable targets as a Python list


def p_varlist(p):
    '''varlist : varlist COMMA variable
               | variable'''
    if len(p) > 2:
        p[0] = p[1]
        p[0].append(p[3])
    else:
        p[0] = [p[1]]


# Builds a list of numbers as a Python list


def p_numlist(p):
    '''numlist : numlist COMMA number
               | number'''
    if len(p) > 2:
        p[0] = p[1]
        p[0].append(p[3])
    else:
        p[0] = [p[1]]


# A number. May be an integer or a float


def p_number(p):
    '''number : INTEGER
              | FLOAT'''
    p[0] = eval(p[1])


# A signed number.


def p_number_signed(p):
    '''number : MINUS INTEGER
              | MINUS FLOAT'''
    p[0] = eval("-" + p[2])


# List of targets for a print statement
# Returns a list of tuples (label,expr)


def p_plist(p):
    '''plist : plist COMMA pitem
             | pitem'''
    if len(p) > 3:
        p[0] = p[1]
        p[0].append(p[3])
    else:
        p[0] = [p[1]]


def p_item_string(p):
    '''pitem : STRING'''
    p[0] = (p[1][1:-1], None)


def p_item_string_expr(p):
    '''pitem : STRING expr'''
    p[0] = (p[1][1:-1], p[2])


def p_item_expr(p):
    '''pitem : expr'''
    p[0] = ("", p[1])


# Empty


def p_empty(p):
    '''empty : '''


# Catastrophic error handler


def p_error(p):
    if not p:
        print("SYNTAX ERROR AT EOF")


bparser = yacc.yacc()


def parse(data, debug=0):
    bparser.error = 0
    p = bparser.parse(data, debug=debug)
    if bparser.error:
        return None
    return p
@@ -37,7 +37,7 @@

# Modifications for inclusion in PLY distribution
import sys
sys.path.insert(0, "../..")
from ply import *

##### Lexer ######

@@ -69,18 +69,21 @@ tokens = (
    'INDENT',
    'DEDENT',
    'ENDMARKER',
)

#t_NUMBER = r'\d+'
# taken from decimal.py but without the leading sign


def t_NUMBER(t):
    r"""(\d+(\.\d*)?|\.\d+)([eE][-+]? \d+)?"""
    t.value = decimal.Decimal(t.value)
    return t
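In PLY, the docstring of `t_NUMBER` above is used verbatim as that token's regular expression, and the master lexer regex is compiled with `re.VERBOSE` (which is why the space inside the exponent part is harmless). A quick standalone check of that pattern with only the `re` and `decimal` stdlib modules:

```python
import re
from decimal import Decimal

# Same pattern as the t_NUMBER docstring above. PLY compiles token
# patterns with re.VERBOSE, so the space before \d+ is ignored;
# passing re.VERBOSE here reproduces that behavior.
NUMBER = re.compile(r"(\d+(\.\d*)?|\.\d+)([eE][-+]? \d+)?", re.VERBOSE)

m = NUMBER.match("3.14e2")
value = Decimal(m.group(0))   # the action converts the lexeme to Decimal
```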
def t_STRING(t):
    r"'([^\\']+|\\'|\\\\)*'"  # I think this is right ...
    t.value = t.value[1:-1].decode("string-escape")  # .swapcase() # for fun
    return t

t_COLON = r':'

@@ -98,10 +101,11 @@ t_SEMICOLON = r';'
# Ply nicely documented how to do this.

RESERVED = {
    "def": "DEF",
    "if": "IF",
    "return": "RETURN",
}


def t_NAME(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'

@@ -111,6 +115,8 @@ def t_NAME(t):
# Putting this before t_WS lets it consume lines with only comments in
# them so the latter code never sees the WS part.  Not consuming the
# newline.  Needed for "if 1: #comment"


def t_comment(t):
    r"[ ]*\043[^\n]*"  # \043 is '#'
    pass

@@ -125,6 +131,8 @@ def t_WS(t):
# Don't generate newline tokens when inside of parenthesis, eg
# a = (1,
#      2, 3)


def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

@@ -132,11 +140,13 @@ def t_newline(t):
    if t.lexer.paren_count == 0:
        return t


def t_LPAR(t):
    r'\('
    t.lexer.paren_count += 1
    return t


def t_RPAR(t):
    r'\)'
    # check for underflow?  should be the job of the parser

@@ -149,7 +159,7 @@ def t_error(t):
    print "Skipping", repr(t.value[0])
    t.lexer.skip(1)

# I implemented INDENT / DEDENT generation as a post-processing filter

# The original lex token stream contains WS and NEWLINE characters.
# WS will only occur before any other tokens on a line.
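The comments above describe the standard trick for indentation-sensitive grammars: compare each line's leading-whitespace depth against a stack of open indentation levels, emitting one INDENT when the depth grows and one DEDENT per popped level when it shrinks. A simplified, PLY-free sketch of that bookkeeping (`indent_events` is a hypothetical name, not part of GardenSnake):

```python
def indent_events(lines):
    """Yield 'INDENT' / 'DEDENT' events from leading-space depths,
    mirroring the post-processing filter described above."""
    levels = [0]
    for line in lines:
        if not line.strip():          # blank lines don't affect indentation
            continue
        depth = len(line) - len(line.lstrip(' '))
        if depth > levels[-1]:
            levels.append(depth)
            yield 'INDENT'
        else:
            while depth < levels[-1]:  # one DEDENT per level popped
                levels.pop()
                yield 'DEDENT'
            if depth != levels[-1]:
                raise IndentationError("inconsistent indentation")
    while len(levels) > 1:            # close blocks still open at EOF
        levels.pop()
        yield 'DEDENT'

src = ["if x:", "    a = 1", "    if y:", "        b = 2", "c = 3"]
events = list(indent_events(src))
```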
@@ -169,6 +179,8 @@ MAY_INDENT = 1
MUST_INDENT = 2

# only care about whitespace at the start of a line


def track_tokens_filter(lexer, tokens):
    lexer.at_line_start = at_line_start = True
    indent = NO_INDENT

@@ -180,7 +192,7 @@ def track_tokens_filter(lexer, tokens):
            at_line_start = False
            indent = MAY_INDENT
            token.must_indent = False

        elif token.type == "NEWLINE":
            at_line_start = True
            if indent == MAY_INDENT:

@@ -204,6 +216,7 @@ def track_tokens_filter(lexer, tokens):
        yield token
    lexer.at_line_start = at_line_start


def _new_token(type, lineno):
    tok = lex.LexToken()
    tok.type = type

@@ -212,10 +225,14 @@ def _new_token(type, lineno):
    return tok

# Synthesize a DEDENT tag


def DEDENT(lineno):
    return _new_token("DEDENT", lineno)

# Synthesize an INDENT tag


def INDENT(lineno):
    return _new_token("INDENT", lineno)

@@ -228,14 +245,14 @@ def indentation_filter(tokens):
    depth = 0
    prev_was_ws = False
    for token in tokens:
        # if 1:
        #    print "Process", token,
        #    if token.at_line_start:
        #        print "at_line_start",
        #    if token.must_indent:
        #        print "must_indent",
        #    print

        # WS only occurs at the start of the line
        # There may be WS followed by NEWLINE so
        # only track the depth here.  Don't indent/dedent

@@ -274,14 +291,15 @@ def indentation_filter(tokens):
            # At the same level
            pass
        elif depth > levels[-1]:
            raise IndentationError(
                "indentation increase but not in new block")
        else:
            # Back up; but only if it matches a previous level
            try:
                i = levels.index(depth)
            except ValueError:
                raise IndentationError("inconsistent indentation")
            for _ in range(i + 1, len(levels)):
                yield DEDENT(token.lineno)
                levels.pop()

@@ -294,11 +312,11 @@ def indentation_filter(tokens):
    assert token is not None
    for _ in range(1, len(levels)):
        yield DEDENT(token.lineno)


# The top-level filter adds an ENDMARKER, if requested.
# Python's grammar uses it.
def filter(lexer, add_endmarker=True):
    token = None
    tokens = iter(lexer.token, None)
    tokens = track_tokens_filter(lexer, tokens)

@@ -313,14 +331,19 @@ def filter(lexer, add_endmarker = True):

# Combine Ply and my filters into a new lexer


class IndentLexer(object):

    def __init__(self, debug=0, optimize=0, lextab='lextab', reflags=0):
        self.lexer = lex.lex(debug=debug, optimize=optimize,
                             lextab=lextab, reflags=reflags)
        self.token_stream = None

    def input(self, s, add_endmarker=True):
        self.lexer.paren_count = 0
        self.lexer.input(s)
        self.token_stream = filter(self.lexer, add_endmarker)

    def token(self):
        try:
            return self.token_stream.next()

@@ -336,6 +359,8 @@ class IndentLexer(object):
from compiler import ast

# Helper function


def Assign(left, right):
    names = []
    if isinstance(left, ast.Name):

@@ -356,35 +381,39 @@ def Assign(left, right):

# The grammar comments come from Python's Grammar/Grammar file

# NB: compound_stmt in single_input is followed by extra NEWLINE!
# file_input: (NEWLINE | stmt)* ENDMARKER


def p_file_input_end(p):
    """file_input_end : file_input ENDMARKER"""
    p[0] = ast.Stmt(p[1])


def p_file_input(p):
    """file_input : file_input NEWLINE
                  | file_input stmt
                  | NEWLINE
                  | stmt"""
    if isinstance(p[len(p) - 1], basestring):
        if len(p) == 3:
            p[0] = p[1]
        else:
            p[0] = []  # p == 2 --> only a blank line
    else:
        if len(p) == 3:
            p[0] = p[1] + p[2]
        else:
            p[0] = p[1]


# funcdef: [decorators] 'def' NAME parameters ':' suite
# ignoring decorators


def p_funcdef(p):
    "funcdef : DEF NAME parameters COLON suite"
    p[0] = ast.Function(None, p[2], tuple(p[3]), (), 0, None, p[5])


# parameters: '(' [varargslist] ')'


def p_parameters(p):
    """parameters : LPAR RPAR
                  | LPAR varargslist RPAR"""

@@ -392,9 +421,9 @@ def p_parameters(p):
        p[0] = []
    else:
        p[0] = p[2]


# varargslist: (fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) |
# highly simplified


def p_varargslist(p):
    """varargslist : varargslist COMMA NAME

@@ -405,21 +434,27 @@ def p_varargslist(p):
        p[0] = [p[1]]

# stmt: simple_stmt | compound_stmt


def p_stmt_simple(p):
    """stmt : simple_stmt"""
    # simple_stmt is a list
    p[0] = p[1]


def p_stmt_compound(p):
    """stmt : compound_stmt"""
    p[0] = [p[1]]

# simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE


def p_simple_stmt(p):
    """simple_stmt : small_stmts NEWLINE
                   | small_stmts SEMICOLON NEWLINE"""
    p[0] = p[1]


def p_small_stmts(p):
    """small_stmts : small_stmts SEMICOLON small_stmt
                   | small_stmt"""

@@ -430,6 +465,8 @@ def p_small_stmts(p):

# small_stmt: expr_stmt | print_stmt  | del_stmt | pass_stmt | flow_stmt |
#    import_stmt | global_stmt | exec_stmt | assert_stmt


def p_small_stmt(p):
    """small_stmt : flow_stmt
                  | expr_stmt"""

@@ -439,6 +476,8 @@ def p_small_stmt(p):
#                     ('=' (yield_expr|testlist))*)
# augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
#             '<<=' | '>>=' | '**=' | '//=')


def p_expr_stmt(p):
    """expr_stmt : testlist ASSIGN testlist
                 | testlist """

@@ -448,11 +487,14 @@ def p_expr_stmt(p):
    else:
        p[0] = Assign(p[1], p[3])


def p_flow_stmt(p):
    "flow_stmt : return_stmt"
    p[0] = p[1]

# return_stmt: 'return' [testlist]


def p_return_stmt(p):
    "return_stmt : RETURN testlist"
    p[0] = ast.Return(p[2])

@@ -463,10 +505,12 @@ def p_compound_stmt(p):
             | funcdef"""
    p[0] = p[1]


def p_if_stmt(p):
    'if_stmt : IF test COLON suite'
    p[0] = ast.If([(p[2], p[4])], None)


def p_suite(p):
    """suite : simple_stmt
             | NEWLINE INDENT stmts DEDENT"""

@@ -474,7 +518,7 @@ def p_suite(p):
        p[0] = ast.Stmt(p[1])
    else:
        p[0] = ast.Stmt(p[3])


def p_stmts(p):
    """stmts : stmts stmt

@@ -484,7 +528,7 @@ def p_stmts(p):
    else:
        p[0] = p[1]

# Not using Python's approach because Ply supports precedence

# comparison: expr (comp_op expr)*
# arith_expr: term (('+'|'-') term)*

@@ -492,12 +536,17 @@ def p_stmts(p):
# factor: ('+'|'-'|'~') factor | power
# comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'


def make_lt_compare((left, right)):
    return ast.Compare(left, [('<', right), ])


def make_gt_compare((left, right)):
    return ast.Compare(left, [('>', right), ])


def make_eq_compare((left, right)):
    return ast.Compare(left, [('==', right), ])


binary_ops = {

@@ -512,12 +561,13 @@ binary_ops = {
unary_ops = {
    "+": ast.UnaryAdd,
    "-": ast.UnarySub,
}
precedence = (
    ("left", "EQ", "GT", "LT"),
    ("left", "PLUS", "MINUS"),
    ("left", "MULT", "DIV"),
)
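The `precedence` table above tells yacc that comparisons bind loosest, `+`/`-` next, and `*`/`/` tightest, all left-associative. The same resolution can be sketched without a parser generator using precedence climbing — `parse_expr` and `PREC` are hypothetical names for illustration, not part of GardenSnake:

```python
# Hand-rolled precedence climbing over the same three levels as the
# `precedence` table above (comparisons, then +/-, then * and /).
PREC = {'<': 1, '>': 1, '==': 1, '+': 2, '-': 2, '*': 3, '/': 3}
OPS = {
    '<': lambda a, b: a < b, '>': lambda a, b: a > b,
    '==': lambda a, b: a == b,
    '+': lambda a, b: a + b, '-': lambda a, b: a - b,
    '*': lambda a, b: a * b, '/': lambda a, b: a / b,
}

def parse_expr(tokens, min_prec=1):
    value = tokens.pop(0)           # operand
    while tokens and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # left associativity: the right side only takes tighter operators
        rhs = parse_expr(tokens, PREC[op] + 1)
        value = OPS[op](value, rhs)
    return value

result = parse_expr([1, '+', 2, '*', 3])   # multiplication binds first
```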
def p_comparison(p):
    """comparison : comparison PLUS comparison

@@ -536,10 +586,12 @@ def p_comparison(p):
        p[0] = unary_ops[p[1]](p[2])
    else:
        p[0] = p[1]


# power: atom trailer* ['**' factor]
# trailers enable function calls.  I only allow one level of calls
# so this is 'trailer'


def p_power(p):
    """power : atom
             | atom trailer"""

@@ -551,26 +603,33 @@ def p_power(p):
    else:
        raise AssertionError("not implemented")


def p_atom_name(p):
    """atom : NAME"""
    p[0] = ast.Name(p[1])


def p_atom_number(p):
    """atom : NUMBER
            | STRING"""
    p[0] = ast.Const(p[1])


def p_atom_tuple(p):
    """atom : LPAR testlist RPAR"""
    p[0] = p[2]

# trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME


def p_trailer(p):
    "trailer : LPAR arglist RPAR"
    p[0] = ("CALL", p[2])

# testlist: test (',' test)* [',']
# Contains shift/reduce error


def p_testlist(p):
    """testlist : testlist_multi COMMA
                | testlist_multi """

@@ -586,6 +645,7 @@ def p_testlist(p):
    if isinstance(p[0], list):
        p[0] = ast.Tuple(p[0])


def p_testlist_multi(p):
    """testlist_multi : testlist_multi COMMA test
                      | test"""

@@ -605,7 +665,6 @@ def p_testlist_multi(p):
def p_test(p):
    "test : comparison"
    p[0] = p[1]


# arglist: (argument ',')* (argument [',']| '*' test [',' '**' test] | '**' test)

@@ -619,17 +678,21 @@ def p_arglist(p):
        p[0] = [p[1]]

# argument: test [gen_for] | test '=' test  # Really [keyword '='] test


def p_argument(p):
    "argument : test"
    p[0] = p[1]


def p_error(p):
    # print "Error!", repr(p)
    raise SyntaxError(p)


class GardenSnakeParser(object):

    def __init__(self, lexer=None):
        if lexer is None:
            lexer = IndentLexer()
        self.lexer = lexer

@@ -637,20 +700,23 @@ class GardenSnakeParser(object):

    def parse(self, code):
        self.lexer.input(code)
        result = self.parser.parse(lexer=self.lexer)
        return ast.Module(None, result)


###### Code generation ######

from compiler import misc, syntax, pycodegen


class GardenSnakeCompiler(object):

    def __init__(self):
        self.parser = GardenSnakeParser()

    def compile(self, code, filename="<string>"):
        tree = self.parser.parse(code)
        # print tree
        misc.set_filename(filename, tree)
        syntax.check(tree)
        gen = pycodegen.ModuleCodeGenerator(tree)

@@ -658,7 +724,7 @@ class GardenSnakeCompiler(object):
        return code

####### Test code #######

compile = GardenSnakeCompiler().compile

code = r"""

@@ -698,8 +764,10 @@ print('BIG DECIMAL', 1.234567891234567e12345)
"""

# Set up the GardenSnake run-time environment


def print_(*args):
    print "-->", " ".join(map(str, args))

globals()["print"] = print_
@@ -5,7 +5,7 @@
# ----------------------------------------------------------------------

import sys
sys.path.insert(0, "../..")

import ply.lex as lex

@@ -15,10 +15,11 @@ reserved = (
    'ELSE', 'ENUM', 'EXTERN', 'FLOAT', 'FOR', 'GOTO', 'IF', 'INT', 'LONG', 'REGISTER',
    'RETURN', 'SHORT', 'SIGNED', 'SIZEOF', 'STATIC', 'STRUCT', 'SWITCH', 'TYPEDEF',
    'UNION', 'UNSIGNED', 'VOID', 'VOLATILE', 'WHILE',
)

tokens = reserved + (
    # Literals (identifier, integer constant, float constant, string constant,
    # char const)
    'ID', 'TYPEID', 'ICONST', 'FCONST', 'SCONST', 'CCONST',

    # Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)

@@ -26,10 +27,10 @@ tokens = reserved + (
    'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
    'LOR', 'LAND', 'LNOT',
    'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',

    # Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
    'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
    'LSHIFTEQUAL', 'RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',

    # Increment/decrement (++,--)
    'PLUSPLUS', 'MINUSMINUS',

@@ -39,7 +40,7 @@ tokens = reserved + (

    # Conditional operator (?)
    'CONDOP',

    # Delimiters ( ) [ ] { } , . ; :
    'LPAREN', 'RPAREN',
    'LBRACKET', 'RBRACKET',

@@ -48,84 +49,87 @@ tokens = reserved + (

    # Ellipsis (...)
    'ELLIPSIS',
)

# Completely ignored characters
t_ignore = ' \t\x0c'

# Newlines


def t_NEWLINE(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")


# Operators
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MOD = r'%'
t_OR = r'\|'
t_AND = r'&'
t_NOT = r'~'
t_XOR = r'\^'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LOR = r'\|\|'
t_LAND = r'&&'
t_LNOT = r'!'
t_LT = r'<'
t_GT = r'>'
t_LE = r'<='
t_GE = r'>='
t_EQ = r'=='
t_NE = r'!='

# Assignment operators
t_EQUALS = r'='
t_TIMESEQUAL = r'\*='
t_DIVEQUAL = r'/='
t_MODEQUAL = r'%='
t_PLUSEQUAL = r'\+='
t_MINUSEQUAL = r'-='
t_LSHIFTEQUAL = r'<<='
t_RSHIFTEQUAL = r'>>='
t_ANDEQUAL = r'&='
t_OREQUAL = r'\|='
t_XOREQUAL = r'\^='

# Increment/decrement
t_PLUSPLUS = r'\+\+'
t_MINUSMINUS = r'--'

# ->
t_ARROW = r'->'

# ?
t_CONDOP = r'\?'

# Delimiters
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_PERIOD = r'\.'
t_SEMI = r';'
t_COLON = r':'
t_ELLIPSIS = r'\.\.\.'

# Identifiers and reserved words

reserved_map = {}
for r in reserved:
    reserved_map[r.lower()] = r


def t_ID(t):
    r'[A-Za-z_][\w_]*'
    t.type = reserved_map.get(t.value, "ID")
    return t

# Integer literal

@@ -141,24 +145,24 @@ t_SCONST = r'\"([^\\\n]|(\\.))*?\"'
t_CCONST = r'(L)?\'([^\\\n]|(\\.))*?\''

# Comments


def t_comment(t):
    r'/\*(.|\n)*?\*/'
    t.lexer.lineno += t.value.count('\n')
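The non-greedy `/\*(.|\n)*?\*/` pattern in `t_comment` above matches one complete C block comment at a time, and the action keeps the line counter accurate by counting the newlines the comment swallowed. A standalone `re` check of that behavior:

```python
import re

# Same non-greedy pattern as t_comment above; the (.|\n) alternation is
# what lets the match cross newlines without re.DOTALL.
C_COMMENT = re.compile(r'/\*(.|\n)*?\*/')

text = "a /* first\ncomment */ b /* second */ c"
m = C_COMMENT.search(text)
newlines = m.group(0).count('\n')   # what the action adds to lexer.lineno
```

Because the repetition is non-greedy, the first match stops at the first `*/` rather than spanning both comments.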
# Preprocessor directive (ignored)


def t_preprocessor(t):
    r'\#(.)*?\n'
    t.lexer.lineno += 1


def t_error(t):
    print("Illegal character %s" % repr(t.value[0]))
    t.lexer.skip(1)


lexer = lex.lex()
if __name__ == "__main__":
    lex.runmain(lexer)
@@ -13,88 +13,109 @@ tokens = clex.tokens

# translation-unit:


def p_translation_unit_1(t):
    'translation_unit : external_declaration'
    pass


def p_translation_unit_2(t):
    'translation_unit : translation_unit external_declaration'
    pass

# external-declaration:


def p_external_declaration_1(t):
    'external_declaration : function_definition'
    pass


def p_external_declaration_2(t):
    'external_declaration : declaration'
    pass

# function-definition:


def p_function_definition_1(t):
    'function_definition : declaration_specifiers declarator declaration_list compound_statement'
    pass


def p_function_definition_2(t):
    'function_definition : declarator declaration_list compound_statement'
    pass


def p_function_definition_3(t):
    'function_definition : declarator compound_statement'
    pass


def p_function_definition_4(t):
    'function_definition : declaration_specifiers declarator compound_statement'
    pass

# declaration:


def p_declaration_1(t):
    'declaration : declaration_specifiers init_declarator_list SEMI'
    pass


def p_declaration_2(t):
    'declaration : declaration_specifiers SEMI'
    pass

# declaration-list:


def p_declaration_list_1(t):
    'declaration_list : declaration'
    pass


def p_declaration_list_2(t):
    'declaration_list : declaration_list declaration '
    pass
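Rules like `declaration_list : declaration | declaration_list declaration` are the standard left-recursive way to collect a sequence in a yacc grammar: each reduction either starts a new list or appends to the one built so far. The grammar above discards its results with `pass`, but the two reduction shapes can be sketched on their own (`reduce_single` and `reduce_append` are hypothetical helper names):

```python
# The two reduction shapes of a left-recursive list rule:
#   declaration_list : declaration                    -> start a new list
#   declaration_list : declaration_list declaration   -> append to it
def reduce_single(decl):
    return [decl]

def reduce_append(decl_list, decl):
    decl_list.append(decl)
    return decl_list

# Reducing "int x; float y; char z;" one declaration at a time,
# in the order a bottom-up parser would perform the reductions:
result = reduce_single("int x;")
result = reduce_append(result, "float y;")
result = reduce_append(result, "char z;")
```

Left recursion is preferred here because an LALR parser can reduce after each element, keeping the parse stack shallow regardless of list length.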
# declaration-specifiers


def p_declaration_specifiers_1(t):
    'declaration_specifiers : storage_class_specifier declaration_specifiers'
    pass


def p_declaration_specifiers_2(t):
    'declaration_specifiers : type_specifier declaration_specifiers'
    pass


def p_declaration_specifiers_3(t):
    'declaration_specifiers : type_qualifier declaration_specifiers'
    pass


def p_declaration_specifiers_4(t):
    'declaration_specifiers : storage_class_specifier'
    pass


def p_declaration_specifiers_5(t):
    'declaration_specifiers : type_specifier'
    pass


def p_declaration_specifiers_6(t):
    'declaration_specifiers : type_qualifier'
    pass

# storage-class-specifier


def p_storage_class_specifier(t):
    '''storage_class_specifier : AUTO
                               | REGISTER

@@ -105,6 +126,8 @@ def p_storage_class_specifier(t):
    pass

# type-specifier:


def p_type_specifier(t):
    '''type_specifier : VOID
                      | CHAR

@@ -122,6 +145,8 @@ def p_type_specifier(t):
    pass

# type-qualifier:


def p_type_qualifier(t):
    '''type_qualifier : CONST
                      | VOLATILE'''

@@ -129,19 +154,24 @@ def p_type_qualifier(t):

# struct-or-union-specifier


def p_struct_or_union_specifier_1(t):
    'struct_or_union_specifier : struct_or_union ID LBRACE struct_declaration_list RBRACE'
    pass


def p_struct_or_union_specifier_2(t):
    'struct_or_union_specifier : struct_or_union LBRACE struct_declaration_list RBRACE'
    pass


def p_struct_or_union_specifier_3(t):
    'struct_or_union_specifier : struct_or_union ID'
    pass

# struct-or-union:


def p_struct_or_union(t):
    '''struct_or_union : STRUCT
                       | UNION

@@ -150,221 +180,273 @@ def p_struct_or_union(t):

# struct-declaration-list:


def p_struct_declaration_list_1(t):
    'struct_declaration_list : struct_declaration'
    pass


def p_struct_declaration_list_2(t):
    'struct_declaration_list : struct_declaration_list struct_declaration'
    pass

# init-declarator-list:


def p_init_declarator_list_1(t):
    'init_declarator_list : init_declarator'
    pass


def p_init_declarator_list_2(t):
    'init_declarator_list : init_declarator_list COMMA init_declarator'
    pass

# init-declarator


def p_init_declarator_1(t):
    'init_declarator : declarator'
    pass


def p_init_declarator_2(t):
    'init_declarator : declarator EQUALS initializer'
    pass

# struct-declaration:


def p_struct_declaration(t):
    'struct_declaration : specifier_qualifier_list struct_declarator_list SEMI'
    pass

# specifier-qualifier-list:


def p_specifier_qualifier_list_1(t):
    'specifier_qualifier_list : type_specifier specifier_qualifier_list'
    pass


def p_specifier_qualifier_list_2(t):
    'specifier_qualifier_list : type_specifier'
    pass


def p_specifier_qualifier_list_3(t):
    'specifier_qualifier_list : type_qualifier specifier_qualifier_list'
    pass


def p_specifier_qualifier_list_4(t):
    'specifier_qualifier_list : type_qualifier'
    pass

# struct-declarator-list:


def p_struct_declarator_list_1(t):
    'struct_declarator_list : struct_declarator'
    pass


def p_struct_declarator_list_2(t):
    'struct_declarator_list : struct_declarator_list COMMA struct_declarator'
    pass

# struct-declarator:


def p_struct_declarator_1(t):
    'struct_declarator : declarator'
    pass


def p_struct_declarator_2(t):
    'struct_declarator : declarator COLON constant_expression'
    pass


def p_struct_declarator_3(t):
    'struct_declarator : COLON constant_expression'
    pass

# enum-specifier:


def p_enum_specifier_1(t):
    'enum_specifier : ENUM ID LBRACE enumerator_list RBRACE'
    pass


def p_enum_specifier_2(t):
    'enum_specifier : ENUM LBRACE enumerator_list RBRACE'
    pass


def p_enum_specifier_3(t):
    'enum_specifier : ENUM ID'
    pass

# enumerator_list:


def p_enumerator_list_1(t):
    'enumerator_list : enumerator'
    pass


def p_enumerator_list_2(t):
    'enumerator_list : enumerator_list COMMA enumerator'
    pass

# enumerator:


def p_enumerator_1(t):
    'enumerator : ID'
    pass


def p_enumerator_2(t):
    'enumerator : ID EQUALS constant_expression'
    pass

# declarator:


def p_declarator_1(t):
    'declarator : pointer direct_declarator'
    pass


def p_declarator_2(t):
    'declarator : direct_declarator'
    pass

# direct-declarator:
|
||||
|
||||
|
||||
def p_direct_declarator_1(t):
|
||||
'direct_declarator : ID'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_declarator_2(t):
|
||||
'direct_declarator : LPAREN declarator RPAREN'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_declarator_3(t):
|
||||
'direct_declarator : direct_declarator LBRACKET constant_expression_opt RBRACKET'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_declarator_4(t):
|
||||
'direct_declarator : direct_declarator LPAREN parameter_type_list RPAREN '
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_declarator_5(t):
|
||||
'direct_declarator : direct_declarator LPAREN identifier_list RPAREN '
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_declarator_6(t):
|
||||
'direct_declarator : direct_declarator LPAREN RPAREN '
|
||||
pass
|
||||
|
||||
# pointer:
|
||||
|
||||
|
||||
def p_pointer_1(t):
|
||||
'pointer : TIMES type_qualifier_list'
|
||||
pass
|
||||
|
||||
|
||||
def p_pointer_2(t):
|
||||
'pointer : TIMES'
|
||||
pass
|
||||
|
||||
|
||||
def p_pointer_3(t):
|
||||
'pointer : TIMES type_qualifier_list pointer'
|
||||
pass
|
||||
|
||||
|
||||
def p_pointer_4(t):
|
||||
'pointer : TIMES pointer'
|
||||
pass
|
||||
|
||||
# type-qualifier-list:
|
||||
|
||||
|
||||
def p_type_qualifier_list_1(t):
|
||||
'type_qualifier_list : type_qualifier'
|
||||
pass
|
||||
|
||||
|
||||
def p_type_qualifier_list_2(t):
|
||||
'type_qualifier_list : type_qualifier_list type_qualifier'
|
||||
pass
|
||||
|
||||
# parameter-type-list:
|
||||
|
||||
|
||||
def p_parameter_type_list_1(t):
|
||||
'parameter_type_list : parameter_list'
|
||||
pass
|
||||
|
||||
|
||||
def p_parameter_type_list_2(t):
|
||||
'parameter_type_list : parameter_list COMMA ELLIPSIS'
|
||||
pass
|
||||
|
||||
# parameter-list:
|
||||
|
||||
|
||||
def p_parameter_list_1(t):
|
||||
'parameter_list : parameter_declaration'
|
||||
pass
|
||||
|
||||
|
||||
def p_parameter_list_2(t):
|
||||
'parameter_list : parameter_list COMMA parameter_declaration'
|
||||
pass
|
||||
|
||||
# parameter-declaration:
|
||||
|
||||
|
||||
def p_parameter_declaration_1(t):
|
||||
'parameter_declaration : declaration_specifiers declarator'
|
||||
pass
|
||||
|
||||
|
||||
def p_parameter_declaration_2(t):
|
||||
'parameter_declaration : declaration_specifiers abstract_declarator_opt'
|
||||
pass
|
||||
|
||||
# identifier-list:
|
||||
|
||||
|
||||
def p_identifier_list_1(t):
|
||||
'identifier_list : ID'
|
||||
pass
|
||||
|
||||
|
||||
def p_identifier_list_2(t):
|
||||
'identifier_list : identifier_list COMMA ID'
|
||||
pass
|
||||
|
||||
# initializer:
|
||||
|
||||
|
||||
def p_initializer_1(t):
|
||||
'initializer : assignment_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_initializer_2(t):
|
||||
'''initializer : LBRACE initializer_list RBRACE
|
||||
| LBRACE initializer_list COMMA RBRACE'''
|
||||
@@ -372,84 +454,102 @@ def p_initializer_2(t):
|
||||
|
||||
# initializer-list:
|
||||
|
||||
|
||||
def p_initializer_list_1(t):
|
||||
'initializer_list : initializer'
|
||||
pass
|
||||
|
||||
|
||||
def p_initializer_list_2(t):
|
||||
'initializer_list : initializer_list COMMA initializer'
|
||||
pass
|
||||
|
||||
# type-name:
|
||||
|
||||
|
||||
def p_type_name(t):
|
||||
'type_name : specifier_qualifier_list abstract_declarator_opt'
|
||||
pass
|
||||
|
||||
|
||||
def p_abstract_declarator_opt_1(t):
|
||||
'abstract_declarator_opt : empty'
|
||||
pass
|
||||
|
||||
|
||||
def p_abstract_declarator_opt_2(t):
|
||||
'abstract_declarator_opt : abstract_declarator'
|
||||
pass
|
||||
|
||||
# abstract-declarator:
|
||||
|
||||
|
||||
def p_abstract_declarator_1(t):
|
||||
'abstract_declarator : pointer '
|
||||
pass
|
||||
|
||||
|
||||
def p_abstract_declarator_2(t):
|
||||
'abstract_declarator : pointer direct_abstract_declarator'
|
||||
pass
|
||||
|
||||
|
||||
def p_abstract_declarator_3(t):
|
||||
'abstract_declarator : direct_abstract_declarator'
|
||||
pass
|
||||
|
||||
# direct-abstract-declarator:
|
||||
|
||||
|
||||
def p_direct_abstract_declarator_1(t):
|
||||
'direct_abstract_declarator : LPAREN abstract_declarator RPAREN'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_abstract_declarator_2(t):
|
||||
'direct_abstract_declarator : direct_abstract_declarator LBRACKET constant_expression_opt RBRACKET'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_abstract_declarator_3(t):
|
||||
'direct_abstract_declarator : LBRACKET constant_expression_opt RBRACKET'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_abstract_declarator_4(t):
|
||||
'direct_abstract_declarator : direct_abstract_declarator LPAREN parameter_type_list_opt RPAREN'
|
||||
pass
|
||||
|
||||
|
||||
def p_direct_abstract_declarator_5(t):
|
||||
'direct_abstract_declarator : LPAREN parameter_type_list_opt RPAREN'
|
||||
pass
|
||||
|
||||
# Optional fields in abstract declarators
|
||||
|
||||
|
||||
def p_constant_expression_opt_1(t):
|
||||
'constant_expression_opt : empty'
|
||||
pass
|
||||
|
||||
|
||||
def p_constant_expression_opt_2(t):
|
||||
'constant_expression_opt : constant_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_parameter_type_list_opt_1(t):
|
||||
'parameter_type_list_opt : empty'
|
||||
pass
|
||||
|
||||
|
||||
def p_parameter_type_list_opt_2(t):
|
||||
'parameter_type_list_opt : parameter_type_list'
|
||||
pass
|
||||
|
||||
# statement:
|
||||
|
||||
|
||||
def p_statement(t):
|
||||
'''
|
||||
statement : labeled_statement
|
||||
@@ -463,124 +563,155 @@ def p_statement(t):
|
||||
|
||||
# labeled-statement:
|
||||
|
||||
|
||||
def p_labeled_statement_1(t):
|
||||
'labeled_statement : ID COLON statement'
|
||||
pass
|
||||
|
||||
|
||||
def p_labeled_statement_2(t):
|
||||
'labeled_statement : CASE constant_expression COLON statement'
|
||||
pass
|
||||
|
||||
|
||||
def p_labeled_statement_3(t):
|
||||
'labeled_statement : DEFAULT COLON statement'
|
||||
pass
|
||||
|
||||
# expression-statement:
|
||||
|
||||
|
||||
def p_expression_statement(t):
|
||||
'expression_statement : expression_opt SEMI'
|
||||
pass
|
||||
|
||||
# compound-statement:
|
||||
|
||||
|
||||
def p_compound_statement_1(t):
|
||||
'compound_statement : LBRACE declaration_list statement_list RBRACE'
|
||||
pass
|
||||
|
||||
|
||||
def p_compound_statement_2(t):
|
||||
'compound_statement : LBRACE statement_list RBRACE'
|
||||
pass
|
||||
|
||||
|
||||
def p_compound_statement_3(t):
|
||||
'compound_statement : LBRACE declaration_list RBRACE'
|
||||
pass
|
||||
|
||||
|
||||
def p_compound_statement_4(t):
|
||||
'compound_statement : LBRACE RBRACE'
|
||||
pass
|
||||
|
||||
# statement-list:
|
||||
|
||||
|
||||
def p_statement_list_1(t):
|
||||
'statement_list : statement'
|
||||
pass
|
||||
|
||||
|
||||
def p_statement_list_2(t):
|
||||
'statement_list : statement_list statement'
|
||||
pass
|
||||
|
||||
# selection-statement
|
||||
|
||||
|
||||
def p_selection_statement_1(t):
|
||||
'selection_statement : IF LPAREN expression RPAREN statement'
|
||||
pass
|
||||
|
||||
|
||||
def p_selection_statement_2(t):
|
||||
'selection_statement : IF LPAREN expression RPAREN statement ELSE statement '
|
||||
pass
|
||||
|
||||
|
||||
def p_selection_statement_3(t):
|
||||
'selection_statement : SWITCH LPAREN expression RPAREN statement '
|
||||
pass
|
||||
|
||||
# iteration_statement:
|
||||
|
||||
|
||||
def p_iteration_statement_1(t):
|
||||
'iteration_statement : WHILE LPAREN expression RPAREN statement'
|
||||
pass
|
||||
|
||||
|
||||
def p_iteration_statement_2(t):
|
||||
'iteration_statement : FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN statement '
|
||||
pass
|
||||
|
||||
|
||||
def p_iteration_statement_3(t):
|
||||
'iteration_statement : DO statement WHILE LPAREN expression RPAREN SEMI'
|
||||
pass
|
||||
|
||||
# jump_statement:
|
||||
|
||||
|
||||
def p_jump_statement_1(t):
|
||||
'jump_statement : GOTO ID SEMI'
|
||||
pass
|
||||
|
||||
|
||||
def p_jump_statement_2(t):
|
||||
'jump_statement : CONTINUE SEMI'
|
||||
pass
|
||||
|
||||
|
||||
def p_jump_statement_3(t):
|
||||
'jump_statement : BREAK SEMI'
|
||||
pass
|
||||
|
||||
|
||||
def p_jump_statement_4(t):
|
||||
'jump_statement : RETURN expression_opt SEMI'
|
||||
pass
|
||||
|
||||
|
||||
def p_expression_opt_1(t):
|
||||
'expression_opt : empty'
|
||||
pass
|
||||
|
||||
|
||||
def p_expression_opt_2(t):
|
||||
'expression_opt : expression'
|
||||
pass
|
||||
|
||||
# expression:
|
||||
|
||||
|
||||
def p_expression_1(t):
|
||||
'expression : assignment_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_expression_2(t):
|
||||
'expression : expression COMMA assignment_expression'
|
||||
pass
|
||||
|
||||
# assigment_expression:
|
||||
|
||||
|
||||
def p_assignment_expression_1(t):
|
||||
'assignment_expression : conditional_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_assignment_expression_2(t):
|
||||
'assignment_expression : unary_expression assignment_operator assignment_expression'
|
||||
pass
|
||||
|
||||
# assignment_operator:
|
||||
|
||||
|
||||
def p_assignment_operator(t):
|
||||
'''
|
||||
assignment_operator : EQUALS
|
||||
@@ -598,66 +729,80 @@ def p_assignment_operator(t):
|
||||
pass
|
||||
|
||||
# conditional-expression
|
||||
|
||||
|
||||
def p_conditional_expression_1(t):
|
||||
'conditional_expression : logical_or_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_conditional_expression_2(t):
|
||||
'conditional_expression : logical_or_expression CONDOP expression COLON conditional_expression '
|
||||
pass
|
||||
|
||||
# constant-expression
|
||||
|
||||
|
||||
def p_constant_expression(t):
|
||||
'constant_expression : conditional_expression'
|
||||
pass
|
||||
|
||||
# logical-or-expression
|
||||
|
||||
|
||||
def p_logical_or_expression_1(t):
|
||||
'logical_or_expression : logical_and_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_logical_or_expression_2(t):
|
||||
'logical_or_expression : logical_or_expression LOR logical_and_expression'
|
||||
pass
|
||||
|
||||
# logical-and-expression
|
||||
|
||||
|
||||
def p_logical_and_expression_1(t):
|
||||
'logical_and_expression : inclusive_or_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_logical_and_expression_2(t):
|
||||
'logical_and_expression : logical_and_expression LAND inclusive_or_expression'
|
||||
pass
|
||||
|
||||
# inclusive-or-expression:
|
||||
|
||||
|
||||
def p_inclusive_or_expression_1(t):
|
||||
'inclusive_or_expression : exclusive_or_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_inclusive_or_expression_2(t):
|
||||
'inclusive_or_expression : inclusive_or_expression OR exclusive_or_expression'
|
||||
pass
|
||||
|
||||
# exclusive-or-expression:
|
||||
|
||||
|
||||
def p_exclusive_or_expression_1(t):
|
||||
'exclusive_or_expression : and_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_exclusive_or_expression_2(t):
|
||||
'exclusive_or_expression : exclusive_or_expression XOR and_expression'
|
||||
pass
|
||||
|
||||
# AND-expression
|
||||
|
||||
|
||||
def p_and_expression_1(t):
|
||||
'and_expression : equality_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_and_expression_2(t):
|
||||
'and_expression : and_expression AND equality_expression'
|
||||
pass
|
||||
@@ -668,10 +813,12 @@ def p_equality_expression_1(t):
|
||||
'equality_expression : relational_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_equality_expression_2(t):
|
||||
'equality_expression : equality_expression EQ relational_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_equality_expression_3(t):
|
||||
'equality_expression : equality_expression NE relational_expression'
|
||||
pass
|
||||
@@ -682,104 +829,129 @@ def p_relational_expression_1(t):
|
||||
'relational_expression : shift_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_relational_expression_2(t):
|
||||
'relational_expression : relational_expression LT shift_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_relational_expression_3(t):
|
||||
'relational_expression : relational_expression GT shift_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_relational_expression_4(t):
|
||||
'relational_expression : relational_expression LE shift_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_relational_expression_5(t):
|
||||
'relational_expression : relational_expression GE shift_expression'
|
||||
pass
|
||||
|
||||
# shift-expression
|
||||
|
||||
|
||||
def p_shift_expression_1(t):
|
||||
'shift_expression : additive_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_shift_expression_2(t):
|
||||
'shift_expression : shift_expression LSHIFT additive_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_shift_expression_3(t):
|
||||
'shift_expression : shift_expression RSHIFT additive_expression'
|
||||
pass
|
||||
|
||||
# additive-expression
|
||||
|
||||
|
||||
def p_additive_expression_1(t):
|
||||
'additive_expression : multiplicative_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_additive_expression_2(t):
|
||||
'additive_expression : additive_expression PLUS multiplicative_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_additive_expression_3(t):
|
||||
'additive_expression : additive_expression MINUS multiplicative_expression'
|
||||
pass
|
||||
|
||||
# multiplicative-expression
|
||||
|
||||
|
||||
def p_multiplicative_expression_1(t):
|
||||
'multiplicative_expression : cast_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_multiplicative_expression_2(t):
|
||||
'multiplicative_expression : multiplicative_expression TIMES cast_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_multiplicative_expression_3(t):
|
||||
'multiplicative_expression : multiplicative_expression DIVIDE cast_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_multiplicative_expression_4(t):
|
||||
'multiplicative_expression : multiplicative_expression MOD cast_expression'
|
||||
pass
|
||||
|
||||
# cast-expression:
|
||||
|
||||
|
||||
def p_cast_expression_1(t):
|
||||
'cast_expression : unary_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_cast_expression_2(t):
|
||||
'cast_expression : LPAREN type_name RPAREN cast_expression'
|
||||
pass
|
||||
|
||||
# unary-expression:
|
||||
|
||||
|
||||
def p_unary_expression_1(t):
|
||||
'unary_expression : postfix_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_unary_expression_2(t):
|
||||
'unary_expression : PLUSPLUS unary_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_unary_expression_3(t):
|
||||
'unary_expression : MINUSMINUS unary_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_unary_expression_4(t):
|
||||
'unary_expression : unary_operator cast_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_unary_expression_5(t):
|
||||
'unary_expression : SIZEOF unary_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_unary_expression_6(t):
|
||||
'unary_expression : SIZEOF LPAREN type_name RPAREN'
|
||||
pass
|
||||
|
||||
#unary-operator
|
||||
|
||||
# unary-operator
|
||||
|
||||
|
||||
def p_unary_operator(t):
|
||||
'''unary_operator : AND
|
||||
| TIMES
|
||||
@@ -790,39 +962,50 @@ def p_unary_operator(t):
|
||||
pass
|
||||
|
||||
# postfix-expression:
|
||||
|
||||
|
||||
def p_postfix_expression_1(t):
|
||||
'postfix_expression : primary_expression'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_2(t):
|
||||
'postfix_expression : postfix_expression LBRACKET expression RBRACKET'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_3(t):
|
||||
'postfix_expression : postfix_expression LPAREN argument_expression_list RPAREN'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_4(t):
|
||||
'postfix_expression : postfix_expression LPAREN RPAREN'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_5(t):
|
||||
'postfix_expression : postfix_expression PERIOD ID'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_6(t):
|
||||
'postfix_expression : postfix_expression ARROW ID'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_7(t):
|
||||
'postfix_expression : postfix_expression PLUSPLUS'
|
||||
pass
|
||||
|
||||
|
||||
def p_postfix_expression_8(t):
|
||||
'postfix_expression : postfix_expression MINUSMINUS'
|
||||
pass
|
||||
|
||||
# primary-expression:
|
||||
|
||||
|
||||
def p_primary_expression(t):
|
||||
'''primary_expression : ID
|
||||
| constant
|
||||
@@ -831,33 +1014,35 @@ def p_primary_expression(t):
|
||||
pass
|
||||
|
||||
# argument-expression-list:
|
||||
|
||||
|
||||
def p_argument_expression_list(t):
|
||||
'''argument_expression_list : assignment_expression
|
||||
| argument_expression_list COMMA assignment_expression'''
|
||||
pass
|
||||
|
||||
# constant:
|
||||
def p_constant(t):
|
||||
'''constant : ICONST
|
||||
| FCONST
|
||||
| CCONST'''
|
||||
pass
|
||||
|
||||
|
||||
def p_constant(t):
|
||||
'''constant : ICONST
|
||||
| FCONST
|
||||
| CCONST'''
|
||||
pass
|
||||
|
||||
|
||||
def p_empty(t):
|
||||
'empty : '
|
||||
pass
|
||||
|
||||
|
||||
def p_error(t):
|
||||
print("Whoa. We're hosed")
|
||||
|
||||
import profile
|
||||
# Build the grammar
|
||||
|
||||
yacc.yacc(method='LALR')
|
||||
yacc.yacc()
|
||||
#yacc.yacc(method='LALR',write_tables=False,debug=False)
|
||||
|
||||
#profile.run("yacc.yacc(method='LALR')")
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
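The p_* functions above all follow the same PLY convention: the grammar production is carried in each function's docstring, written as `'nonterminal : SYMBOLS | SYMBOLS'`. As a rough, PLY-independent sketch of how such a docstring decomposes into a rule name and its alternatives (the helper name `split_production` is mine, not a PLY API):

```python
# Sketch (not part of PLY): split a p_* grammar docstring of the form
# 'lhs : SYM SYM | SYM' into its nonterminal and alternative productions,
# mirroring the convention ply.yacc reads from these docstrings.

def split_production(doc):
    """Return (lhs, [right-hand sides]) for one grammar docstring."""
    lhs, _, rhs = doc.partition(':')
    # Each '|'-separated alternative becomes a list of grammar symbols.
    alternatives = [alt.split() for alt in rhs.split('|')]
    return lhs.strip(), alternatives

lhs, alts = split_production('struct_or_union : STRUCT | UNION')
print(lhs)   # struct_or_union
print(alts)  # [['STRUCT'], ['UNION']]
```

An empty production such as `'empty : '` decomposes the same way, yielding a single empty alternative.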
@@ -6,20 +6,21 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
    raw_input = input

tokens = (
    'NAME', 'NUMBER',
)

literals = ['=', '+', '-', '*', '/', '(', ')']

# Tokens

t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'


def t_NUMBER(t):
    r'\d+'
@@ -28,14 +29,16 @@ def t_NUMBER(t):

t_ignore = " \t"


def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")


def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()
@@ -43,44 +46,55 @@ lex.lex()

# Parsing rules

precedence = (
    ('left', '+', '-'),
    ('left', '*', '/'),
    ('right', 'UMINUS'),
)

# dictionary of names
names = {}


def p_statement_assign(p):
    'statement : NAME "=" expression'
    names[p[1]] = p[3]


def p_statement_expr(p):
    'statement : expression'
    print(p[1])


def p_expression_binop(p):
    '''expression : expression '+' expression
                  | expression '-' expression
                  | expression '*' expression
                  | expression '/' expression'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]


def p_expression_uminus(p):
    "expression : '-' expression %prec UMINUS"
    p[0] = -p[2]


def p_expression_group(p):
    "expression : '(' expression ')'"
    p[0] = p[2]


def p_expression_number(p):
    "expression : NUMBER"
    p[0] = p[1]


def p_expression_name(p):
    "expression : NAME"
    try:
@@ -89,6 +103,7 @@ def p_expression_name(p):
        print("Undefined name '%s'" % p[1])
        p[0] = 0


def p_error(p):
    if p:
        print("Syntax error at '%s'" % p.value)
@@ -103,5 +118,6 @@ while 1:
        s = raw_input('calc > ')
    except EOFError:
        break
    if not s:
        continue
    yacc.parse(s)
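The `precedence` table in the calculator above ('+' and '-' binding looser than '*' and '/', with a right-associative unary minus) is what resolves the grammar's shift/reduce conflicts. The same effect can be sketched independently of PLY with a small precedence-climbing evaluator (illustrative only; integer arithmetic and `//` stand in for the example's `/`):

```python
# Precedence-climbing sketch of what the PLY precedence table achieves.
# Not part of the example; names and tokenizer are mine.
import re

PREC = {'+': 1, '-': 1, '*': 2, '/': 2}  # higher number = binds tighter

def evaluate(expr):
    tokens = re.findall(r'\d+|[-+*/()]', expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_tok():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():
        tok = next_tok()
        if tok == '(':
            val = binop(1)
            next_tok()  # consume ')'
            return val
        return int(tok)

    def binop(min_prec):
        left = atom()
        # Keep consuming operators at or above the current precedence level;
        # recursing with PREC[op] + 1 makes each level left-associative.
        while peek() in PREC and PREC[peek()] >= min_prec:
            op = next_tok()
            right = binop(PREC[op] + 1)
            if op == '+':
                left += right
            elif op == '-':
                left -= right
            elif op == '*':
                left *= right
            else:
                left //= right  # integer division for the sketch
        return left

    return binop(1)

print(evaluate('2+3*4'))    # 14
print(evaluate('(2+3)*4'))  # 20
```

As with the PLY table, `2+3*4` reduces the multiplication first, and `8-2-3` groups as `(8-2)-3` because the '+'/'-' level is left-associative.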
@@ -6,20 +6,21 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
    raw_input = input

tokens = (
    'NAME', 'NUMBER',
)

literals = ['=', '+', '-', '*', '/', '(', ')']

# Tokens

t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'


def t_NUMBER(t):
    r'\d+'
@@ -28,14 +29,16 @@ def t_NUMBER(t):

t_ignore = " \t"


def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")


def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()
@@ -43,44 +46,55 @@ lex.lex()

# Parsing rules

precedence = (
    ('left', '+', '-'),
    ('left', '*', '/'),
    ('right', 'UMINUS'),
)

# dictionary of names
names = {}


def p_statement_assign(p):
    'statement : NAME "=" expression'
    names[p[1]] = p[3]


def p_statement_expr(p):
    'statement : expression'
    print(p[1])


def p_expression_binop(p):
    '''expression : expression '+' expression
                  | expression '-' expression
                  | expression '*' expression
                  | expression '/' expression'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]


def p_expression_uminus(p):
    "expression : '-' expression %prec UMINUS"
    p[0] = -p[2]


def p_expression_group(p):
    "expression : '(' expression ')'"
    p[0] = p[2]


def p_expression_number(p):
    "expression : NUMBER"
    p[0] = p[1]


def p_expression_name(p):
    "expression : NAME"
    try:
@@ -89,6 +103,7 @@ def p_expression_name(p):
        print("Undefined name '%s'" % p[1])
        p[0] = 0


def p_error(p):
    if p:
        print("Syntax error at '%s'" % p.value)
@@ -109,5 +124,6 @@ while 1:
        s = raw_input('calc > ')
    except EOFError:
        break
    if not s:
        continue
    yacc.parse(s, debug=logging.getLogger())
ext/ply/example/calceof/calc.py
@@ -0,0 +1,132 @@
# -----------------------------------------------------------------------------
# calc.py
#
# A simple calculator with variables. Asks the user for more input and
# demonstrates the use of the t_eof() rule.
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
    raw_input = input

tokens = (
    'NAME', 'NUMBER',
)

literals = ['=', '+', '-', '*', '/', '(', ')']

# Tokens

t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'


def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

t_ignore = " \t"


def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")


def t_eof(t):
    more = raw_input('... ')
    if more:
        t.lexer.input(more + '\n')
        return t.lexer.token()
    else:
        return None


def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()

# Parsing rules

precedence = (
    ('left', '+', '-'),
    ('left', '*', '/'),
    ('right', 'UMINUS'),
)

# dictionary of names
names = {}


def p_statement_assign(p):
    'statement : NAME "=" expression'
    names[p[1]] = p[3]


def p_statement_expr(p):
    'statement : expression'
    print(p[1])


def p_expression_binop(p):
    '''expression : expression '+' expression
                  | expression '-' expression
                  | expression '*' expression
                  | expression '/' expression'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]


def p_expression_uminus(p):
    "expression : '-' expression %prec UMINUS"
    p[0] = -p[2]


def p_expression_group(p):
    "expression : '(' expression ')'"
    p[0] = p[2]


def p_expression_number(p):
    "expression : NUMBER"
    p[0] = p[1]


def p_expression_name(p):
    "expression : NAME"
    try:
        p[0] = names[p[1]]
    except LookupError:
        print("Undefined name '%s'" % p[1])
        p[0] = 0


def p_error(p):
    if p:
        print("Syntax error at '%s'" % p.value)
    else:
        print("Syntax error at EOF")

import ply.yacc as yacc
yacc.yacc()

while 1:
    try:
        s = raw_input('calc > ')
    except EOFError:
        break
    if not s:
        continue
    yacc.parse(s + '\n')
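The calceof example above hinges on the t_eof() hook (the reason for this PLY upgrade): when the lexer exhausts its input, t_eof() can feed it more text and hand back the next token. The refill idea can be sketched without PLY or an interactive terminal; all names below are illustrative, not PLY APIs:

```python
# Sketch of the t_eof() refill pattern, independent of PLY: when the
# token stream runs dry, ask a callback for more text and keep scanning.
import re

def tokens(text, more):
    """Yield number/operator tokens; on end of input call more() to refill."""
    while True:
        for tok in re.findall(r'\d+|[^\s]', text):
            yield tok
        text = more()      # the t_eof() analogue: try to obtain more input
        if not text:
            return         # nothing more available: genuine EOF

# Simulate a user typing three chunks, the last one empty (done).
chunks = iter(['1 + 2', '* 3', ''])
print(list(tokens(next(chunks), lambda: next(chunks))))
# ['1', '+', '2', '*', '3']
```

In the real example, the `more()` role is played by `raw_input('... ')`, and returning `None` from t_eof() tells the parser that input has truly ended.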
@@ -1,4 +1,4 @@
#!/usr/bin/env python2.7
#!/usr/bin/env python

# -----------------------------------------------------------------------------
# calc.py
@@ -10,7 +10,7 @@

import sys
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
    raw_input = input
@@ -19,6 +19,7 @@ import ply.lex as lex
import ply.yacc as yacc
import os


class Parser:
    """
    Base class for a lexer/parser that has the rules defined as methods
@@ -28,14 +29,15 @@ class Parser:

    def __init__(self, **kw):
        self.debug = kw.get('debug', 0)
        self.names = {}
        try:
            modname = os.path.split(os.path.splitext(__file__)[0])[
                1] + "_" + self.__class__.__name__
        except:
            modname = "parser" + "_" + self.__class__.__name__
        self.debugfile = modname + ".dbg"
        self.tabmodule = modname + "_" + "parsetab"
        # print self.debugfile, self.tabmodule

        # Build the lexer and parser
        lex.lex(module=self, debug=self.debug)
@@ -50,29 +52,30 @@ class Parser:
            s = raw_input('calc > ')
        except EOFError:
            break
        if not s:
            continue
        yacc.parse(s)


class Calc(Parser):

    tokens = (
        'NAME', 'NUMBER',
        'PLUS', 'MINUS', 'EXP', 'TIMES', 'DIVIDE', 'EQUALS',
        'LPAREN', 'RPAREN',
    )

    # Tokens

    t_PLUS = r'\+'
    t_MINUS = r'-'
    t_EXP = r'\*\*'
    t_TIMES = r'\*'
    t_DIVIDE = r'/'
    t_EQUALS = r'='
    t_LPAREN = r'\('
    t_RPAREN = r'\)'
    t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

    def t_NUMBER(self, t):
        r'\d+'
@@ -81,7 +84,7 @@ class Calc(Parser):
        except ValueError:
            print("Integer value too large %s" % t.value)
            t.value = 0
        # print "parsed number %s" % repr(t.value)
        return t

    t_ignore = " \t"
@@ -89,7 +92,7 @@ class Calc(Parser):
    def t_newline(self, t):
        r'\n+'
        t.lexer.lineno += t.value.count("\n")

    def t_error(self, t):
        print("Illegal character '%s'" % t.value[0])
        t.lexer.skip(1)
@@ -97,11 +100,11 @@ class Calc(Parser):
    # Parsing rules

    precedence = (
        ('left', 'PLUS', 'MINUS'),
        ('left', 'TIMES', 'DIVIDE'),
        ('left', 'EXP'),
        ('right', 'UMINUS'),
    )

    def p_statement_assign(self, p):
        'statement : NAME EQUALS expression'
@@ -119,12 +122,17 @@ class Calc(Parser):
        | expression DIVIDE expression
        | expression EXP expression
        """
        # print [repr(p[i]) for i in range(0,4)]
        if p[2] == '+':
            p[0] = p[1] + p[3]
        elif p[2] == '-':
            p[0] = p[1] - p[3]
        elif p[2] == '*':
            p[0] = p[1] * p[3]
        elif p[2] == '/':
            p[0] = p[1] / p[3]
        elif p[2] == '**':
            p[0] = p[1] ** p[3]

    def p_expression_uminus(self, p):
        'expression : MINUS expression %prec UMINUS'
@@ -2,37 +2,38 @@
# calc.py
#
# A calculator parser that makes use of closures. The function make_calculator()
# returns a function that accepts an input string and returns a result. All
# lexing rules, parsing rules, and internal state are held inside the function.
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
raw_input = input

# Make a calculator function

def make_calculator():
import ply.lex as lex
import ply.yacc as yacc

# ------- Internal calculator state

variables = { }       # Dictionary of stored variables
variables = {}  # Dictionary of stored variables

# ------- Calculator tokenizing rules

tokens = (
'NAME','NUMBER',
'NAME', 'NUMBER',
)

literals = ['=','+','-','*','/', '(',')']
literals = ['=', '+', '-', '*', '/', '(', ')']

t_ignore = " \t"

t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
r'\d+'

@@ -42,20 +43,20 @@ def make_calculator():
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")

def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)

# Build the lexer
lexer = lex.lex()

# ------- Calculator parsing rules

precedence = (
('left','+','-'),
('left','*','/'),
('right','UMINUS'),
('left', '+', '-'),
('left', '*', '/'),
('right', 'UMINUS'),
)

def p_statement_assign(p):

@@ -72,10 +73,14 @@ def make_calculator():
| expression '-' expression
| expression '*' expression
| expression '/' expression'''
if p[2] == '+' : p[0] = p[1] + p[3]
elif p[2] == '-': p[0] = p[1] - p[3]
elif p[2] == '*': p[0] = p[1] * p[3]
elif p[2] == '/': p[0] = p[1] / p[3]
if p[2] == '+':
p[0] = p[1] + p[3]
elif p[2] == '-':
p[0] = p[1] - p[3]
elif p[2] == '*':
p[0] = p[1] * p[3]
elif p[2] == '/':
p[0] = p[1] / p[3]

def p_expression_uminus(p):
"expression : '-' expression %prec UMINUS"

@@ -103,14 +108,13 @@ def make_calculator():
else:
print("Syntax error at EOF")

# Build the parser
parser = yacc.yacc()

# ------- Input function

def input(text):
result = parser.parse(text,lexer=lexer)
result = parser.parse(text, lexer=lexer)
return result

return input

@@ -126,5 +130,3 @@ while True:
r = calc(s)
if r:
print(r)
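The header comment of closurecalc/calc.py describes the pattern the file relies on: make_calculator() keeps its lexer, parser, and `variables` dictionary as local state and exposes only a single function. A minimal pure-Python sketch of that closure pattern, separate from PLY itself (`make_counter` is a hypothetical name used only for illustration):

```python
# Sketch of the closure pattern used by closurecalc/calc.py:
# all mutable state lives inside the factory function, and callers
# only ever see the returned inner function.
def make_counter():
    counts = {}  # internal state, analogous to the example's `variables` dict

    def count(name):
        # Each call updates the state captured by the closure.
        counts[name] = counts.get(name, 0) + 1
        return counts[name]

    return count

counter = make_counter()
counter("x")
print(counter("x"))  # → 2
```

Each call to make_counter() produces an independent counter, just as each call to make_calculator() produces a calculator with its own variable store.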
@@ -15,34 +15,34 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

tokens = (
'H_EDIT_DESCRIPTOR',
)

# Tokens
t_ignore = " \t\n"

def t_H_EDIT_DESCRIPTOR(t):
r"\d+H.*"  # This grabs all of the remaining text
i = t.value.index('H')
n = eval(t.value[:i])

# Adjust the tokenizing position
t.lexer.lexpos -= len(t.value) - (i+1+n)

t.value = t.value[i+1:i+1+n]
return t

t.lexer.lexpos -= len(t.value) - (i + 1 + n)

t.value = t.value[i + 1:i + 1 + n]
return t

def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()
lex.runmain()
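The t_H_EDIT_DESCRIPTOR rule above handles a token that cannot be matched by a regular expression alone: a FORTRAN Hollerith descriptor such as `3Habc` is length-prefixed, meaning "the 3 characters after H". The rule matches greedily, then rewinds `lexpos` using the prefix count. The same position arithmetic can be sketched in plain Python without PLY (`parse_h_edit` is a hypothetical helper name, not part of the example):

```python
def parse_h_edit(text, pos=0):
    """Parse a Hollerith descriptor like '3Habc' starting at pos.

    Returns (value, next_pos), mirroring the arithmetic in
    t_H_EDIT_DESCRIPTOR: the digits before 'H' give the number of
    characters after it that belong to the token.
    """
    i = text.index('H', pos)
    n = int(text[pos:i])  # the example uses eval(); int() is the safer choice
    value = text[i + 1:i + 1 + n]
    return value, i + 1 + n

print(parse_h_edit("3Habc def"))  # → ('abc', 5)
```

The returned `next_pos` plays the role of the adjusted `t.lexer.lexpos`: lexing resumes immediately after the `n` consumed characters, even though the regex originally grabbed the rest of the line.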
@@ -1,4 +1,4 @@
#!/usr/bin/env python2.7
#!/usr/bin/env python

# -----------------------------------------------------------------------------
# calc.py
@@ -12,7 +12,7 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
raw_input = input

@@ -21,6 +21,7 @@ import ply.lex as lex
import ply.yacc as yacc
import os

class Parser(object):
"""
Base class for a lexer/parser that has the rules defined as methods

@@ -28,17 +29,17 @@ class Parser(object):
tokens = ()
precedence = ()

def __init__(self, **kw):
self.debug = kw.get('debug', 0)
self.names = { }
self.names = {}
try:
modname = os.path.split(os.path.splitext(__file__)[0])[1] + "_" + self.__class__.__name__
modname = os.path.split(os.path.splitext(__file__)[0])[
1] + "_" + self.__class__.__name__
except:
modname = "parser"+"_"+self.__class__.__name__
modname = "parser" + "_" + self.__class__.__name__
self.debugfile = modname + ".dbg"
self.tabmodule = modname + "_" + "parsetab"
#print self.debugfile, self.tabmodule
# print self.debugfile, self.tabmodule

# Build the lexer and parser
lex.lex(module=self, debug=self.debug)

@@ -53,29 +54,30 @@ class Parser(object):
s = raw_input('calc > ')
except EOFError:
break
if not s: continue
if not s:
continue
yacc.parse(s)

class Calc(Parser):

tokens = (
'NAME','NUMBER',
'PLUS','MINUS','EXP', 'TIMES','DIVIDE','EQUALS',
'LPAREN','RPAREN',
)
'NAME', 'NUMBER',
'PLUS', 'MINUS', 'EXP', 'TIMES', 'DIVIDE', 'EQUALS',
'LPAREN', 'RPAREN',
)

# Tokens

t_PLUS = r'\+'
t_MINUS = r'-'
t_EXP = r'\*\*'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_EQUALS = r'='
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(self, t):
r'\d+'

@@ -84,7 +86,7 @@ class Calc(Parser):
except ValueError:
print("Integer value too large %s" % t.value)
t.value = 0
#print "parsed number %s" % repr(t.value)
# print "parsed number %s" % repr(t.value)
return t

t_ignore = " \t"

@@ -92,7 +94,7 @@ class Calc(Parser):
def t_newline(self, t):
r'\n+'
t.lexer.lineno += t.value.count("\n")

def t_error(self, t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)

@@ -100,11 +102,11 @@ class Calc(Parser):
# Parsing rules

precedence = (
('left','PLUS','MINUS'),
('left','TIMES','DIVIDE'),
('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE'),
('left', 'EXP'),
('right','UMINUS'),
)
('right', 'UMINUS'),
)

def p_statement_assign(self, p):
'statement : NAME EQUALS expression'

@@ -122,12 +124,17 @@ class Calc(Parser):
| expression DIVIDE expression
| expression EXP expression
"""
#print [repr(p[i]) for i in range(0,4)]
if p[2] == '+' : p[0] = p[1] + p[3]
elif p[2] == '-': p[0] = p[1] - p[3]
elif p[2] == '*': p[0] = p[1] * p[3]
elif p[2] == '/': p[0] = p[1] / p[3]
elif p[2] == '**': p[0] = p[1] ** p[3]
# print [repr(p[i]) for i in range(0,4)]
if p[2] == '+':
p[0] = p[1] + p[3]
elif p[2] == '-':
p[0] = p[1] - p[3]
elif p[2] == '*':
p[0] = p[1] * p[3]
elif p[2] == '/':
p[0] = p[1] / p[3]
elif p[2] == '**':
p[0] = p[1] ** p[3]

def p_expression_uminus(self, p):
'expression : MINUS expression %prec UMINUS'
@@ -6,27 +6,28 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

if sys.version_info[0] >= 3:
raw_input = input

tokens = (
'NAME','NUMBER',
'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
'LPAREN','RPAREN',
)
'NAME', 'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'EQUALS',
'LPAREN', 'RPAREN',
)

# Tokens

t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_EQUALS = r'='
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
r'\d+'

@@ -39,14 +40,16 @@ def t_NUMBER(t):

t_ignore = " \t"

def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")

def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex(optimize=1)

@@ -54,45 +57,57 @@ lex.lex(optimize=1)
# Parsing rules

precedence = (
('left','PLUS','MINUS'),
('left','TIMES','DIVIDE'),
('right','UMINUS'),
)
('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE'),
('right', 'UMINUS'),
)

# dictionary of names
names = { }
names = {}

def p_statement_assign(t):
'statement : NAME EQUALS expression'
names[t[1]] = t[3]

def p_statement_expr(t):
'statement : expression'
print(t[1])

def p_expression_binop(t):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if t[2] == '+' : t[0] = t[1] + t[3]
elif t[2] == '-': t[0] = t[1] - t[3]
elif t[2] == '*': t[0] = t[1] * t[3]
elif t[2] == '/': t[0] = t[1] / t[3]
elif t[2] == '<': t[0] = t[1] < t[3]
if t[2] == '+':
t[0] = t[1] + t[3]
elif t[2] == '-':
t[0] = t[1] - t[3]
elif t[2] == '*':
t[0] = t[1] * t[3]
elif t[2] == '/':
t[0] = t[1] / t[3]
elif t[2] == '<':
t[0] = t[1] < t[3]

def p_expression_uminus(t):
'expression : MINUS expression %prec UMINUS'
t[0] = -t[2]

def p_expression_group(t):
'expression : LPAREN expression RPAREN'
t[0] = t[2]

def p_expression_number(t):
'expression : NUMBER'
t[0] = t[1]

def p_expression_name(t):
'expression : NAME'
try:

@@ -101,6 +116,7 @@ def p_expression_name(t):
print("Undefined name '%s'" % t[1])
t[0] = 0

def p_error(t):
if t:
print("Syntax error at '%s'" % t.value)

@@ -116,4 +132,3 @@ while 1:
except EOFError:
break
yacc.parse(s)
@@ -8,24 +8,25 @@
# -----------------------------------------------------------------------------

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

tokens = (
'NAME','NUMBER',
'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
'LPAREN','RPAREN',
)
'NAME', 'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'EQUALS',
'LPAREN', 'RPAREN',
)

# Tokens

t_PLUS = ur'\+'
t_MINUS = ur'-'
t_TIMES = ur'\*'
t_DIVIDE = ur'/'
t_EQUALS = ur'='
t_LPAREN = ur'\('
t_RPAREN = ur'\)'
t_NAME = ur'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
ur'\d+'

@@ -38,14 +39,16 @@ def t_NUMBER(t):

t_ignore = u" \t"

def t_newline(t):
ur'\n+'
t.lexer.lineno += t.value.count("\n")

def t_error(t):
print "Illegal character '%s'" % t.value[0]
t.lexer.skip(1)

# Build the lexer
import ply.lex as lex
lex.lex()

@@ -53,44 +56,55 @@ lex.lex()
# Parsing rules

precedence = (
('left','PLUS','MINUS'),
('left','TIMES','DIVIDE'),
('right','UMINUS'),
)
('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE'),
('right', 'UMINUS'),
)

# dictionary of names
names = { }
names = {}

def p_statement_assign(p):
'statement : NAME EQUALS expression'
names[p[1]] = p[3]

def p_statement_expr(p):
'statement : expression'
print p[1]

def p_expression_binop(p):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if p[2] == u'+' : p[0] = p[1] + p[3]
elif p[2] == u'-': p[0] = p[1] - p[3]
elif p[2] == u'*': p[0] = p[1] * p[3]
elif p[2] == u'/': p[0] = p[1] / p[3]
if p[2] == u'+':
p[0] = p[1] + p[3]
elif p[2] == u'-':
p[0] = p[1] - p[3]
elif p[2] == u'*':
p[0] = p[1] * p[3]
elif p[2] == u'/':
p[0] = p[1] / p[3]

def p_expression_uminus(p):
'expression : MINUS expression %prec UMINUS'
p[0] = -p[2]

def p_expression_group(p):
'expression : LPAREN expression RPAREN'
p[0] = p[2]

def p_expression_number(p):
'expression : NUMBER'
p[0] = p[1]

def p_expression_name(p):
'expression : NAME'
try:

@@ -99,6 +113,7 @@ def p_expression_name(p):
print "Undefined name '%s'" % p[1]
p[0] = 0

def p_error(p):
if p:
print "Syntax error at '%s'" % p.value

@@ -113,5 +128,6 @@ while 1:
s = raw_input('calc > ')
except EOFError:
break
if not s: continue
if not s:
continue
yacc.parse(unicode(s))
@@ -9,104 +9,111 @@ sys.path.append("../..")
from ply import *

tokens = (
'LITERAL','SECTION','TOKEN','LEFT','RIGHT','PREC','START','TYPE','NONASSOC','UNION','CODE',
'ID','QLITERAL','NUMBER',
'LITERAL', 'SECTION', 'TOKEN', 'LEFT', 'RIGHT', 'PREC', 'START', 'TYPE', 'NONASSOC', 'UNION', 'CODE',
'ID', 'QLITERAL', 'NUMBER',
)

states = (('code','exclusive'),)
states = (('code', 'exclusive'),)

literals = [ ';', ',', '<', '>', '|',':' ]
literals = [';', ',', '<', '>', '|', ':']
t_ignore = ' \t'

t_TOKEN = r'%token'
t_LEFT = r'%left'
t_RIGHT = r'%right'
t_NONASSOC = r'%nonassoc'
t_PREC = r'%prec'
t_START = r'%start'
t_TYPE = r'%type'
t_UNION = r'%union'
t_ID = r'[a-zA-Z_][a-zA-Z_0-9]*'
t_QLITERAL = r'''(?P<quote>['"]).*?(?P=quote)'''
t_NUMBER = r'\d+'

def t_SECTION(t):
r'%%'
if getattr(t.lexer,"lastsection",0):
t.value = t.lexer.lexdata[t.lexpos+2:]
t.lexer.lexpos = len(t.lexer.lexdata)
if getattr(t.lexer, "lastsection", 0):
t.value = t.lexer.lexdata[t.lexpos + 2:]
t.lexer.lexpos = len(t.lexer.lexdata)
else:
t.lexer.lastsection = 0
return t

# Comments

def t_ccomment(t):
r'/\*(.|\n)*?\*/'
t.lexer.lineno += t.value.count('\n')

t_ignore_cppcomment = r'//.*'

def t_LITERAL(t):
r'%\{(.|\n)*?%\}'
t.lexer.lineno += t.value.count("\n")
return t

def t_NEWLINE(t):
r'\n'
t.lexer.lineno += 1

def t_code(t):
r'\{'
t.lexer.codestart = t.lexpos
t.lexer.level = 1
t.lexer.begin('code')

def t_code_ignore_string(t):
r'\"([^\\\n]|(\\.))*?\"'

def t_code_ignore_char(t):
r'\'([^\\\n]|(\\.))*?\''

def t_code_ignore_comment(t):
r'/\*(.|\n)*?\*/'

def t_code_ignore_cppcom(t):
r'//.*'

def t_code_lbrace(t):
r'\{'
t.lexer.level += 1

def t_code_rbrace(t):
r'\}'
t.lexer.level -= 1
if t.lexer.level == 0:
t.type = 'CODE'
t.value = t.lexer.lexdata[t.lexer.codestart:t.lexpos+1]
t.lexer.begin('INITIAL')
t.lexer.lineno += t.value.count('\n')
return t
t.type = 'CODE'
t.value = t.lexer.lexdata[t.lexer.codestart:t.lexpos + 1]
t.lexer.begin('INITIAL')
t.lexer.lineno += t.value.count('\n')
return t

t_code_ignore_nonspace = r'[^\s\}\'\"\{]+'
t_code_ignore_whitespace = r'\s+'
t_code_ignore = ""

def t_code_error(t):
raise RuntimeError

def t_error(t):
print "%d: Illegal character '%s'" % (t.lexer.lineno, t.value[0])
print t.value
print("%d: Illegal character '%s'" % (t.lexer.lineno, t.value[0]))
print(t.value)
t.lexer.skip(1)

lex.lex()

if __name__ == '__main__':
lex.runmain()
@@ -9,53 +9,61 @@ tokens = ylex.tokens
from ply import *

tokenlist = []
preclist = []

emit_code = 1

def p_yacc(p):
'''yacc : defsection rulesection'''

def p_defsection(p):
'''defsection : definitions SECTION
| SECTION'''
p.lexer.lastsection = 1
print "tokens = ", repr(tokenlist)
print
print "precedence = ", repr(preclist)
print
print "# -------------- RULES ----------------"
print
print("tokens = ", repr(tokenlist))
print()
print("precedence = ", repr(preclist))
print()
print("# -------------- RULES ----------------")
print()

def p_rulesection(p):
'''rulesection : rules SECTION'''

print "# -------------- RULES END ----------------"
print_code(p[2],0)
print("# -------------- RULES END ----------------")
print_code(p[2], 0)

def p_definitions(p):
'''definitions : definitions definition
| definition'''

def p_definition_literal(p):
'''definition : LITERAL'''
print_code(p[1],0)
print_code(p[1], 0)

def p_definition_start(p):
'''definition : START ID'''
print "start = '%s'" % p[2]
print("start = '%s'" % p[2])

def p_definition_token(p):
'''definition : toktype opttype idlist optsemi '''
for i in p[3]:
if i[0] not in "'\"":
tokenlist.append(i)
if p[1] == '%left':
preclist.append(('left',) + tuple(p[3]))
elif p[1] == '%right':
preclist.append(('right',) + tuple(p[3]))
elif p[1] == '%nonassoc':
preclist.append(('nonassoc',)+ tuple(p[3]))
preclist.append(('nonassoc',) + tuple(p[3]))

def p_toktype(p):
'''toktype : TOKEN

@@ -64,10 +72,12 @@ def p_toktype(p):
| NONASSOC'''
p[0] = p[1]

def p_opttype(p):
'''opttype : '<' ID '>'
| empty'''

def p_idlist(p):
'''idlist : idlist optcomma tokenid
| tokenid'''

@@ -77,141 +87,158 @@ def p_idlist(p):
p[0] = p[1]
p[1].append(p[3])

def p_tokenid(p):
'''tokenid : ID
| ID NUMBER
| QLITERAL
| QLITERAL NUMBER'''
p[0] = p[1]

def p_optsemi(p):
'''optsemi : ';'
| empty'''

def p_optcomma(p):
'''optcomma : ','
| empty'''

def p_definition_type(p):
'''definition : TYPE '<' ID '>' namelist optsemi'''
# type declarations are ignored

def p_namelist(p):
'''namelist : namelist optcomma ID
| ID'''

def p_definition_union(p):
'''definition : UNION CODE optsemi'''
# Union declarations are ignored

def p_rules(p):
'''rules : rules rule
| rule'''
if len(p) == 2:
rule = p[1]
else:
rule = p[2]

# Print out a Python equivalent of this rule

embedded = [ ]      # Embedded actions (a mess)
embedded = []  # Embedded actions (a mess)
embed_count = 0

rulename = rule[0]
rulecount = 1
for r in rule[1]:
# r contains one of the rule possibilities
print "def p_%s_%d(p):" % (rulename,rulecount)
print("def p_%s_%d(p):" % (rulename, rulecount))
prod = []
prodcode = ""
for i in range(len(r)):
item = r[i]
if item[0] == '{':  # A code block
if i == len(r) - 1:
prodcode = item
break
else:
# an embedded action
embed_name = "_embed%d_%s" % (embed_count,rulename)
prod.append(embed_name)
embedded.append((embed_name,item))
embed_count += 1
else:
prod.append(item)
print "    '''%s : %s'''" % (rulename, " ".join(prod))
item = r[i]
if item[0] == '{':  # A code block
if i == len(r) - 1:
prodcode = item
break
else:
# an embedded action
embed_name = "_embed%d_%s" % (embed_count, rulename)
prod.append(embed_name)
embedded.append((embed_name, item))
embed_count += 1
else:
prod.append(item)
print("    '''%s : %s'''" % (rulename, " ".join(prod)))
# Emit code
print_code(prodcode,4)
print
print_code(prodcode, 4)
print()
rulecount += 1

for e,code in embedded:
print "def p_%s(p):" % e
print "    '''%s : '''" % e
print_code(code,4)
print
for e, code in embedded:
print("def p_%s(p):" % e)
print("    '''%s : '''" % e)
print_code(code, 4)
print()

def p_rule(p):
'''rule : ID ':' rulelist ';' '''
p[0] = (p[1],[p[3]])
p[0] = (p[1], [p[3]])

def p_rule2(p):
'''rule : ID ':' rulelist morerules ';' '''
p[4].insert(0,p[3])
p[0] = (p[1],p[4])
p[4].insert(0, p[3])
p[0] = (p[1], p[4])

def p_rule_empty(p):
'''rule : ID ':' ';' '''
p[0] = (p[1],[[]])
p[0] = (p[1], [[]])

def p_rule_empty2(p):
'''rule : ID ':' morerules ';' '''

p[3].insert(0,[])
p[0] = (p[1],p[3])
p[3].insert(0, [])
p[0] = (p[1], p[3])

def p_morerules(p):
'''morerules : morerules '|' rulelist
| '|' rulelist
| '|' '''

if len(p) == 2:
p[0] = [[]]
elif len(p) == 3:
p[0] = [p[2]]
else:
p[0] = p[1]
p[0].append(p[3])

# print("morerules", len(p), p[0])

# print "morerules", len(p), p[0]

def p_rulelist(p):
'''rulelist : rulelist ruleitem
| ruleitem'''

if len(p) == 2:
p[0] = [p[1]]
else:
p[0] = p[1]
p[1].append(p[2])

def p_ruleitem(p):
'''ruleitem : ID
| QLITERAL
| CODE
| PREC'''
p[0] = p[1]

def p_empty(p):
'''empty : '''

def p_error(p):
pass

yacc.yacc(debug=0)

def print_code(code,indent):
if not emit_code: return

def print_code(code, indent):
if not emit_code:
return
codelines = code.splitlines()
for c in codelines:
print "%s# %s" % (" "*indent,c)

print("%s# %s" % (" " * indent, c))
@@ -21,7 +21,7 @@
#

import sys
sys.path.insert(0,"../..")
sys.path.insert(0, "../..")

import ylex
import yparse

@@ -29,25 +29,23 @@ import yparse
from ply import *

if len(sys.argv) == 1:
print "usage : yply.py [-nocode] inputfile"
print("usage : yply.py [-nocode] inputfile")
raise SystemExit

if len(sys.argv) == 3:
if sys.argv[1] == '-nocode':
yparse.emit_code = 0
else:
print "Unknown option '%s'" % sys.argv[1]
raise SystemExit
print("Unknown option '%s'" % sys.argv[1])
raise SystemExit
filename = sys.argv[2]
else:
filename = sys.argv[1]

yacc.parse(open(filename).read())

print """
print("""
if __name__ == '__main__':
from ply import *
yacc.yacc()
"""

""")
23
ext/ply/ply.egg-info/PKG-INFO
Normal file
@@ -0,0 +1,23 @@
Metadata-Version: 1.1
Name: ply
Version: 3.11
Summary: Python Lex & Yacc
Home-page: http://www.dabeaz.com/ply/
Author: David Beazley
Author-email: dave@dabeaz.com
License: BSD
Description-Content-Type: UNKNOWN
Description:
PLY is yet another implementation of lex and yacc for Python. Some notable
features include the fact that its implemented entirely in Python and it
uses LALR(1) parsing which is efficient and well suited for larger grammars.

PLY provides most of the standard lex/yacc features including support for empty
productions, precedence rules, error recovery, and support for ambiguous grammars.

PLY is extremely easy to use and provides very extensive error checking.
It is compatible with both Python 2 and Python 3.

Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 2
190
ext/ply/ply.egg-info/SOURCES.txt
Normal file
@@ -0,0 +1,190 @@
ANNOUNCE
CHANGES
MANIFEST.in
README.md
TODO
setup.cfg
setup.py
doc/internal.html
doc/makedoc.py
doc/ply.html
example/README
example/cleanup.sh
example/BASIC/README
example/BASIC/basic.py
example/BASIC/basiclex.py
example/BASIC/basiclog.py
example/BASIC/basinterp.py
example/BASIC/basparse.py
example/BASIC/dim.bas
example/BASIC/func.bas
example/BASIC/gcd.bas
example/BASIC/gosub.bas
example/BASIC/hello.bas
example/BASIC/linear.bas
example/BASIC/maxsin.bas
example/BASIC/powers.bas
example/BASIC/rand.bas
example/BASIC/sales.bas
example/BASIC/sears.bas
example/BASIC/sqrt1.bas
example/BASIC/sqrt2.bas
example/GardenSnake/GardenSnake.py
example/GardenSnake/README
example/ansic/README
example/ansic/clex.py
example/ansic/cparse.py
example/calc/calc.py
example/calcdebug/calc.py
example/calceof/calc.py
example/classcalc/calc.py
example/closurecalc/calc.py
example/hedit/hedit.py
example/newclasscalc/calc.py
example/optcalc/README
example/optcalc/calc.py
example/unicalc/calc.py
example/yply/README
example/yply/ylex.py
example/yply/yparse.py
example/yply/yply.py
ply/__init__.py
ply/cpp.py
ply/ctokens.py
ply/lex.py
ply/yacc.py
ply/ygen.py
ply.egg-info/PKG-INFO
ply.egg-info/SOURCES.txt
ply.egg-info/dependency_links.txt
ply.egg-info/top_level.txt
test/README
test/calclex.py
test/cleanup.sh
test/lex_closure.py
test/lex_doc1.py
test/lex_dup1.py
test/lex_dup2.py
test/lex_dup3.py
test/lex_empty.py
test/lex_error1.py
test/lex_error2.py
test/lex_error3.py
test/lex_error4.py
test/lex_hedit.py
test/lex_ignore.py
test/lex_ignore2.py
test/lex_literal1.py
test/lex_literal2.py
test/lex_literal3.py
test/lex_many_tokens.py
test/lex_module.py
test/lex_module_import.py
test/lex_object.py
test/lex_opt_alias.py
test/lex_optimize.py
test/lex_optimize2.py
test/lex_optimize3.py
test/lex_optimize4.py
test/lex_re1.py
test/lex_re2.py
test/lex_re3.py
test/lex_rule1.py
test/lex_rule2.py
test/lex_rule3.py
test/lex_state1.py
test/lex_state2.py
test/lex_state3.py
test/lex_state4.py
test/lex_state5.py
test/lex_state_noerror.py
test/lex_state_norule.py
test/lex_state_try.py
test/lex_token1.py
test/lex_token2.py
test/lex_token3.py
test/lex_token4.py
test/lex_token5.py
test/lex_token_dup.py
test/parser.out
test/testcpp.py
test/testlex.py
test/testyacc.py
test/yacc_badargs.py
test/yacc_badid.py
test/yacc_badprec.py
test/yacc_badprec2.py
test/yacc_badprec3.py
test/yacc_badrule.py
test/yacc_badtok.py
test/yacc_dup.py
test/yacc_error1.py
test/yacc_error2.py
test/yacc_error3.py
test/yacc_error4.py
test/yacc_error5.py
test/yacc_error6.py
test/yacc_error7.py
test/yacc_inf.py
test/yacc_literal.py
test/yacc_misplaced.py
test/yacc_missing1.py
test/yacc_nested.py
test/yacc_nodoc.py
test/yacc_noerror.py
test/yacc_nop.py
test/yacc_notfunc.py
test/yacc_notok.py
test/yacc_prec1.py
test/yacc_rr.py
test/yacc_rr_unused.py
test/yacc_simple.py
test/yacc_sr.py
test/yacc_term1.py
test/yacc_unicode_literals.py
test/yacc_unused.py
test/yacc_unused_rule.py
test/yacc_uprec.py
test/yacc_uprec2.py
test/pkg_test1/__init__.py
test/pkg_test1/parsing/__init__.py
test/pkg_test1/parsing/calclex.py
test/pkg_test1/parsing/calcparse.py
test/pkg_test1/parsing/lextab.py
test/pkg_test1/parsing/parser.out
test/pkg_test1/parsing/parsetab.py
test/pkg_test2/__init__.py
test/pkg_test2/parsing/__init__.py
test/pkg_test2/parsing/calclex.py
test/pkg_test2/parsing/calclextab.py
test/pkg_test2/parsing/calcparse.py
test/pkg_test2/parsing/calcparsetab.py
test/pkg_test2/parsing/parser.out
test/pkg_test3/__init__.py
test/pkg_test3/generated/__init__.py
test/pkg_test3/generated/lextab.py
test/pkg_test3/generated/parser.out
|
||||
test/pkg_test3/generated/parsetab.py
|
||||
test/pkg_test3/parsing/__init__.py
|
||||
test/pkg_test3/parsing/calclex.py
|
||||
test/pkg_test3/parsing/calcparse.py
|
||||
test/pkg_test4/__init__.py
|
||||
test/pkg_test4/parsing/__init__.py
|
||||
test/pkg_test4/parsing/calclex.py
|
||||
test/pkg_test4/parsing/calcparse.py
|
||||
test/pkg_test5/__init__.py
|
||||
test/pkg_test5/parsing/__init__.py
|
||||
test/pkg_test5/parsing/calclex.py
|
||||
test/pkg_test5/parsing/calcparse.py
|
||||
test/pkg_test5/parsing/lextab.py
|
||||
test/pkg_test5/parsing/parser.out
|
||||
test/pkg_test5/parsing/parsetab.py
|
||||
test/pkg_test6/__init__.py
|
||||
test/pkg_test6/parsing/__init__.py
|
||||
test/pkg_test6/parsing/calclex.py
|
||||
test/pkg_test6/parsing/calcparse.py
|
||||
test/pkg_test6/parsing/expression.py
|
||||
test/pkg_test6/parsing/lextab.py
|
||||
test/pkg_test6/parsing/parser.out
|
||||
test/pkg_test6/parsing/parsetab.py
|
||||
test/pkg_test6/parsing/statement.py
|
||||
ext/ply/ply.egg-info/dependency_links.txt (new file, 1 line)
@@ -0,0 +1 @@
+

ext/ply/ply.egg-info/top_level.txt (new file, 1 line)
@@ -0,0 +1 @@
+ply
@@ -1,4 +1,5 @@
 # PLY package
 # Author: David Beazley (dave@dabeaz.com)

+__version__ = '3.11'
 __all__ = ['lex','yacc']
@@ -5,17 +5,26 @@
 # Copyright (C) 2007
 # All rights reserved
 #
-# This module implements an ANSI-C style lexical preprocessor for PLY.
+# This module implements an ANSI-C style lexical preprocessor for PLY.
 # -----------------------------------------------------------------------------
-from __future__ import generators
+import sys

+# Some Python 3 compatibility shims
+if sys.version_info.major < 3:
+    STRING_TYPES = (str, unicode)
+else:
+    STRING_TYPES = str
+    xrange = range

 # -----------------------------------------------------------------------------
 # Default preprocessor lexer definitions.   These tokens are enough to get
 # a basic preprocessor working.   Other modules may import these if they want
 # -----------------------------------------------------------------------------

 tokens = (
-   'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT', 'CPP_POUND','CPP_DPOUND'
+   'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT1', 'CPP_COMMENT2', 'CPP_POUND','CPP_DPOUND'
 )

 literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""

@@ -34,7 +43,7 @@ t_CPP_ID = r'[A-Za-z_][\w_]*'

 # Integer literal
 def CPP_INTEGER(t):
-    r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU]|[lL]|[uU][lL]|[lL][uU])?)'
+    r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU][lL]|[lL][uU]|[uU]|[lL])?)'
     return t

 t_CPP_INTEGER = CPP_INTEGER
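The reordering of the integer-suffix alternation above matters because Python's `re` alternation takes the first branch that matches, not the longest one: with the one-letter suffixes listed first, a two-letter suffix such as `UL` was only half consumed. A stdlib-only sketch (the literal `123UL` is just an example input):

```python
import re

# Old ordering: single-letter suffixes come first, so the match stops
# after consuming only the 'U' of a 'UL' suffix.
old = re.compile(r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?')
# New ordering: two-letter suffixes are tried first, so 'UL' is consumed whole.
new = re.compile(r'\d+([uU][lL]|[lL][uU]|[uU]|[lL])?')

print(old.match("123UL").group())  # '123U' -- suffix truncated
print(new.match("123UL").group())  # '123UL'
```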
@@ -55,11 +64,21 @@ def t_CPP_CHAR(t):
     return t

 # Comment
-def t_CPP_COMMENT(t):
-    r'(/\*(.|\n)*?\*/)|(//.*?\n)'
-    t.lexer.lineno += t.value.count("\n")
+def t_CPP_COMMENT1(t):
+    r'(/\*(.|\n)*?\*/)'
+    ncr = t.value.count("\n")
+    t.lexer.lineno += ncr
+    # replace with one space or a number of '\n'
+    t.type = 'CPP_WS'; t.value = '\n' * ncr if ncr else ' '
     return t

+# Line comment
+def t_CPP_COMMENT2(t):
+    r'(//.*?(\n|$))'
+    # replace with '/n'
+    t.type = 'CPP_WS'; t.value = '\n'
+    return t
+
 def t_error(t):
     t.type = t.value[0]
     t.value = t.value[0]
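The new `t_CPP_COMMENT1`/`t_CPP_COMMENT2` rules above rewrite comments into whitespace tokens instead of swallowing them, keeping the newline count intact so later line numbers stay correct. A rough PLY-free sketch of the same substitution idea using plain `re` (the function name and sample input are invented for illustration):

```python
import re

def strip_comments(text):
    """Replace C comments with whitespace that preserves the newline count."""
    def block(m):
        ncr = m.group(0).count("\n")
        # a block comment becomes one space, or as many newlines as it spanned
        return "\n" * ncr if ncr else " "
    text = re.sub(r'/\*(.|\n)*?\*/', block, text)
    # a line comment collapses to a single newline
    text = re.sub(r'//.*?(\n|$)', "\n", text)
    return text

src = "int x; /* two\nline comment */ int y; // trailing\n"
out = strip_comments(src)
print(out.count("\n") == src.count("\n"))  # True: line numbering preserved
```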
@@ -73,8 +92,8 @@ import os.path

 # -----------------------------------------------------------------------------
 # trigraph()
-#
-# Given an input string, this function replaces all trigraph sequences.
+#
+# Given an input string, this function replaces all trigraph sequences.
 # The following mapping is used:
 #
 #     ??=    #

@@ -176,7 +195,7 @@ class Preprocessor(object):
     # ----------------------------------------------------------------------

     def error(self,file,line,msg):
-        print >>sys.stderr,"%s:%d %s" % (file,line,msg)
+        print("%s:%d %s" % (file,line,msg))

     # ----------------------------------------------------------------------
     # lexprobe()

@@ -193,7 +212,7 @@ class Preprocessor(object):
         self.lexer.input("identifier")
         tok = self.lexer.token()
         if not tok or tok.value != "identifier":
-            print "Couldn't determine identifier type"
+            print("Couldn't determine identifier type")
         else:
             self.t_ID = tok.type

@@ -201,7 +220,7 @@ class Preprocessor(object):
         self.lexer.input("12345")
         tok = self.lexer.token()
         if not tok or int(tok.value) != 12345:
-            print "Couldn't determine integer type"
+            print("Couldn't determine integer type")
         else:
             self.t_INTEGER = tok.type
             self.t_INTEGER_TYPE = type(tok.value)

@@ -210,7 +229,7 @@ class Preprocessor(object):
         self.lexer.input("\"filename\"")
         tok = self.lexer.token()
         if not tok or tok.value != "\"filename\"":
-            print "Couldn't determine string type"
+            print("Couldn't determine string type")
         else:
             self.t_STRING = tok.type

@@ -227,7 +246,7 @@ class Preprocessor(object):
         tok = self.lexer.token()
         if not tok or tok.value != "\n":
             self.t_NEWLINE = None
-            print "Couldn't determine token for newlines"
+            print("Couldn't determine token for newlines")
         else:
             self.t_NEWLINE = tok.type

@@ -239,12 +258,12 @@ class Preprocessor(object):
             self.lexer.input(c)
             tok = self.lexer.token()
             if not tok or tok.value != c:
-                print "Unable to lex '%s' required for preprocessor" % c
+                print("Unable to lex '%s' required for preprocessor" % c)

     # ----------------------------------------------------------------------
     # add_path()
     #
-    # Adds a search path to the preprocessor.
+    # Adds a search path to the preprocessor.
     # ----------------------------------------------------------------------

     def add_path(self,path):

@@ -288,7 +307,7 @@ class Preprocessor(object):

     # ----------------------------------------------------------------------
     # tokenstrip()
-    #
+    #
     # Remove leading/trailing whitespace tokens from a token list
     # ----------------------------------------------------------------------

@@ -314,7 +333,7 @@ class Preprocessor(object):
     # argument. Each argument is represented by a list of tokens.
     #
     # When collecting arguments, leading and trailing whitespace is removed
-    # from each argument.
+    # from each argument.
     #
     # This function properly handles nested parenthesis and commas---these do not
     # define new arguments.

@@ -326,7 +345,7 @@ class Preprocessor(object):
         current_arg = []
         nesting = 1
         tokenlen = len(tokenlist)
-
+
         # Search for the opening '('.
         i = 0
         while (i < tokenlen) and (tokenlist[i].type in self.t_WS):

@@ -360,7 +379,7 @@ class Preprocessor(object):
             else:
                 current_arg.append(t)
             i += 1
-
+
         # Missing end argument
         self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments")
         return 0, [],[]

@@ -372,9 +391,9 @@ class Preprocessor(object):
     # This is used to speed up macro expansion later on---we'll know
     # right away where to apply patches to the value to form the expansion
     # ----------------------------------------------------------------------
-
+
     def macro_prescan(self,macro):
-        macro.patch     = []             # Standard macro arguments
+        macro.patch     = []             # Standard macro arguments
         macro.str_patch = []             # String conversion expansion
         macro.var_comma_patch = []       # Variadic macro comma patch
         i = 0

@@ -392,10 +411,11 @@ class Preprocessor(object):
                 elif (i > 0 and macro.value[i-1].value == '##'):
                     macro.patch.append(('c',argnum,i-1))
                     del macro.value[i-1]
+                    i -= 1
                     continue
                 elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
                     macro.patch.append(('c',argnum,i))
-                    i += 1
+                    del macro.value[i + 1]
                     continue
                 # Standard expansion
                 else:

@@ -421,7 +441,7 @@ class Preprocessor(object):
         rep = [copy.copy(_x) for _x in macro.value]

         # Make string expansion patches.  These do not alter the length of the replacement sequence
-
+
         str_expansion = {}
         for argnum, i in macro.str_patch:
             if argnum not in str_expansion:

@@ -439,7 +459,7 @@ class Preprocessor(object):
         # Make all other patches.   The order of these matters.  It is assumed that the patch list
         # has been sorted in reverse order of patch location since replacements will cause the
         # size of the replacement sequence to expand from the patch point.
-
+
         expanded = { }
         for ptype, argnum, i in macro.patch:
             # Concatenation.   Argument is left unexpanded

@@ -476,7 +496,7 @@ class Preprocessor(object):
                 if t.value in self.macros and t.value not in expanded:
                     # Yes, we found a macro match
                     expanded[t.value] = True
-
+
                     m = self.macros[t.value]
                     if not m.arglist:
                         # A simple macro

@@ -490,7 +510,7 @@ class Preprocessor(object):
                         j = i + 1
                         while j < len(tokens) and tokens[j].type in self.t_WS:
                             j += 1
-                        if tokens[j].value == '(':
+                        if j < len(tokens) and tokens[j].value == '(':
                             tokcount,args,positions = self.collect_args(tokens[j:])
                             if not m.variadic and len(args) != len(m.arglist):
                                 self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist)))

@@ -508,7 +528,7 @@ class Preprocessor(object):
                             else:
                                 args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
                                 del args[len(m.arglist):]
-
+
                             # Get macro replacement text
                             rep = self.macro_expand_args(m,args)
                             rep = self.expand_macros(rep,expanded)

@@ -516,18 +536,24 @@ class Preprocessor(object):
                                 r.lineno = t.lineno
                             tokens[i:j+tokcount] = rep
                             i += len(rep)
+                        else:
+                            # This is not a macro. It is just a word which
+                            # equals to name of the macro. Hence, go to the
+                            # next token.
+                            i += 1
+
                     del expanded[t.value]
                     continue
             elif t.value == '__LINE__':
                 t.type = self.t_INTEGER
                 t.value = self.t_INTEGER_TYPE(t.lineno)
-
+
         i += 1
         return tokens

-    # ----------------------------------------------------------------------
+    # ----------------------------------------------------------------------
     # evalexpr()
-    #
+    #
     # Evaluate an expression token sequence for the purposes of evaluating
     # integral expressions.
     # ----------------------------------------------------------------------

@@ -574,14 +600,14 @@ class Preprocessor(object):
                 tokens[i].value = str(tokens[i].value)
                 while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
                     tokens[i].value = tokens[i].value[:-1]
-
+
         expr = "".join([str(x.value) for x in tokens])
         expr = expr.replace("&&"," and ")
         expr = expr.replace("||"," or ")
         expr = expr.replace("!"," not ")
         try:
             result = eval(expr)
-        except StandardError:
+        except Exception:
             self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression")
             result = 0
         return result
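For context, `evalexpr()` works by textually rewriting C's boolean operators into Python and handing the string to `eval()`; the hunk above only swaps `StandardError` (removed in Python 3) for `Exception`. A minimal standalone sketch of that rewrite-and-eval trick (simplified: it skips cpp.py's token preprocessing, and like the original its bare `!` replacement is crude):

```python
def eval_c_expr(expr):
    """Evaluate a C-style integral expression by rewriting it into Python."""
    expr = expr.replace("&&", " and ")
    expr = expr.replace("||", " or ")
    expr = expr.replace("!", " not ")   # crude, as in cpp.py itself
    try:
        return eval(expr)
    except Exception:   # StandardError no longer exists in Python 3
        return 0

print(eval_c_expr("1 && 0"))        # 0 (falsy, like C)
print(eval_c_expr("(2 > 1) || 0"))  # True
```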
@@ -599,7 +625,7 @@ class Preprocessor(object):

         if not source:
             source = ""
-
+
         self.define("__FILE__ \"%s\"" % source)

         self.source = source

@@ -614,10 +640,11 @@ class Preprocessor(object):
                 if tok.value == '#':
                     # Preprocessor directive

+                    # insert necessary whitespace instead of eaten tokens
                     for tok in x:
-                        if tok in self.t_WS and '\n' in tok.value:
+                        if tok.type in self.t_WS and '\n' in tok.value:
                             chunk.append(tok)
-
+
                     dirtokens = self.tokenstrip(x[i+1:])
                     if dirtokens:
                         name = dirtokens[0].value

@@ -625,7 +652,7 @@ class Preprocessor(object):
                     else:
                         name = ""
                         args = []
-
+
                     if name == 'define':
                         if enable:
                             for tok in self.expand_macros(chunk):

@@ -685,7 +712,7 @@ class Preprocessor(object):
                                 iftrigger = True
                         else:
                             self.error(self.source,dirtokens[0].lineno,"Misplaced #elif")
-
+
                     elif name == 'else':
                         if ifstack:
                             if ifstack[-1][0]:

@@ -737,7 +764,7 @@ class Preprocessor(object):
                     break
                 i += 1
             else:
-                print "Malformed #include <...>"
+                print("Malformed #include <...>")
                 return
             filename = "".join([x.value for x in tokens[1:i]])
             path = self.path + [""] + self.temp_path

@@ -745,7 +772,7 @@ class Preprocessor(object):
             filename = tokens[0].value[1:-1]
             path = self.temp_path + [""] + self.path
         else:
-            print "Malformed #include statement"
+            print("Malformed #include statement")
             return
         for p in path:
             iname = os.path.join(p,filename)

@@ -759,10 +786,10 @@ class Preprocessor(object):
                 if dname:
                     del self.temp_path[0]
                 break
-            except IOError,e:
+            except IOError:
                 pass
         else:
-            print "Couldn't find '%s'" % filename
+            print("Couldn't find '%s'" % filename)

     # ----------------------------------------------------------------------
     # define()

@@ -771,7 +798,7 @@ class Preprocessor(object):
     # ----------------------------------------------------------------------

     def define(self,tokens):
-        if isinstance(tokens,(str,unicode)):
+        if isinstance(tokens,STRING_TYPES):
             tokens = self.tokenize(tokens)

         linetok = tokens

@@ -794,7 +821,7 @@ class Preprocessor(object):
                 variadic = False
                 for a in args:
                     if variadic:
-                        print "No more arguments may follow a variadic argument"
+                        print("No more arguments may follow a variadic argument")
                         break
                     astr = "".join([str(_i.value) for _i in a])
                     if astr == "...":

@@ -813,7 +840,7 @@ class Preprocessor(object):
                         a[0].value = a[0].value[:-3]
                         continue
                     if len(a) > 1 or a[0].type != self.t_ID:
-                        print "Invalid macro argument"
+                        print("Invalid macro argument")
                         break
                 else:
                     mvalue = self.tokenstrip(linetok[1+tokcount:])

@@ -830,9 +857,9 @@ class Preprocessor(object):
                     self.macro_prescan(m)
                     self.macros[name.value] = m
             else:
-                print "Bad macro definition"
+                print("Bad macro definition")
         except LookupError:
-            print "Bad macro definition"
+            print("Bad macro definition")

     # ----------------------------------------------------------------------
     # undef()

@@ -855,7 +882,7 @@ class Preprocessor(object):
     def parse(self,input,source=None,ignore={}):
         self.ignore = ignore
         self.parser = self.parsegen(input,source)
-
+
     # ----------------------------------------------------------------------
     # token()
     #

@@ -864,7 +891,7 @@ class Preprocessor(object):
     def token(self):
         try:
             while True:
-                tok = self.parser.next()
+                tok = next(self.parser)
                 if tok.type not in self.ignore: return tok
         except StopIteration:
             self.parser = None

@@ -884,15 +911,4 @@ if __name__ == '__main__':
     while True:
         tok = p.token()
         if not tok: break
-        print p.source, tok
-
-
-
-
-
-
-
-
-
-
-
+        print(p.source, tok)
@@ -9,27 +9,27 @@

 tokens = [
     # Literals (identifier, integer constant, float constant, string constant, char const)
-    'ID', 'TYPEID', 'ICONST', 'FCONST', 'SCONST', 'CCONST',
+    'ID', 'TYPEID', 'INTEGER', 'FLOAT', 'STRING', 'CHARACTER',

     # Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
-    'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MOD',
+    'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MODULO',
     'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
     'LOR', 'LAND', 'LNOT',
     'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',
-
+
     # Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
     'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
     'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',

     # Increment/decrement (++,--)
-    'PLUSPLUS', 'MINUSMINUS',
+    'INCREMENT', 'DECREMENT',

     # Structure dereference (->)
     'ARROW',

     # Ternary operator (?)
     'TERNARY',
-
+
     # Delimeters ( ) [ ] { } , . ; :
     'LPAREN', 'RPAREN',
     'LBRACKET', 'RBRACKET',

@@ -39,7 +39,7 @@ tokens = [
     # Ellipsis (...)
     'ELLIPSIS',
 ]
-
+
 # Operators
 t_PLUS = r'\+'
 t_MINUS = r'-'

@@ -74,7 +74,7 @@ t_LSHIFTEQUAL = r'<<='
 t_RSHIFTEQUAL = r'>>='
 t_ANDEQUAL = r'&='
 t_OREQUAL = r'\|='
-t_XOREQUAL = r'^='
+t_XOREQUAL = r'\^='

 # Increment/decrement
 t_INCREMENT = r'\+\+'

@@ -125,9 +125,3 @@ def t_CPPCOMMENT(t):
     r'//.*\n'
     t.lexer.lineno += 1
     return t
-
-
-
-
-
-
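The `t_XOREQUAL` fix above matters because an unescaped `^` in a regular expression is the start-of-string anchor, not a literal caret, so `r'^='` can never match the two-character operator `^=` and instead matches a bare `=`. A stdlib-only illustration:

```python
import re

# Unescaped '^' acts as an anchor: the pattern means "an '=' at the start".
print(re.match(r'^=', '^='))            # None: never matches the caret
print(re.match(r'^=', '=').group())     # '=' -- matches the wrong text
# Escaped caret matches the literal two-character operator.
print(re.match(r'\^=', '^=').group())   # '^='
```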
File diff suppressed because it is too large.

ext/ply/ply/yacc.py (2244 lines)
File diff suppressed because it is too large.

ext/ply/ply/ygen.py (new file, 69 lines)
@@ -0,0 +1,69 @@
# ply: ygen.py
#
# This is a support program that auto-generates different versions of the YACC parsing
# function with different features removed for the purposes of performance.
#
# Users should edit the method LRParser.parsedebug() in yacc.py.   The source code
# for that method is then used to create the other methods.   See the comments in
# yacc.py for further details.

import os.path
import shutil

def get_source_range(lines, tag):
    srclines = enumerate(lines)
    start_tag = '#--! %s-start' % tag
    end_tag = '#--! %s-end' % tag

    for start_index, line in srclines:
        if line.strip().startswith(start_tag):
            break

    for end_index, line in srclines:
        if line.strip().endswith(end_tag):
            break

    return (start_index + 1, end_index)

def filter_section(lines, tag):
    filtered_lines = []
    include = True
    tag_text = '#--! %s' % tag
    for line in lines:
        if line.strip().startswith(tag_text):
            include = not include
        elif include:
            filtered_lines.append(line)
    return filtered_lines

def main():
    dirname = os.path.dirname(__file__)
    shutil.copy2(os.path.join(dirname, 'yacc.py'), os.path.join(dirname, 'yacc.py.bak'))
    with open(os.path.join(dirname, 'yacc.py'), 'r') as f:
        lines = f.readlines()

    parse_start, parse_end = get_source_range(lines, 'parsedebug')
    parseopt_start, parseopt_end = get_source_range(lines, 'parseopt')
    parseopt_notrack_start, parseopt_notrack_end = get_source_range(lines, 'parseopt-notrack')

    # Get the original source
    orig_lines = lines[parse_start:parse_end]

    # Filter the DEBUG sections out
    parseopt_lines = filter_section(orig_lines, 'DEBUG')

    # Filter the TRACKING sections out
    parseopt_notrack_lines = filter_section(parseopt_lines, 'TRACKING')

    # Replace the parser source sections with updated versions
    lines[parseopt_notrack_start:parseopt_notrack_end] = parseopt_notrack_lines
    lines[parseopt_start:parseopt_end] = parseopt_lines

    lines = [line.rstrip()+'\n' for line in lines]
    with open(os.path.join(dirname, 'yacc.py'), 'w') as f:
        f.writelines(lines)

    print('Updated yacc.py')

if __name__ == '__main__':
    main()
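ygen.py's `filter_section()` drops everything between a pair of `#--! TAG` marker lines (markers included), toggling inclusion at each marker. A standalone rerun of that exact logic on made-up sample lines (the function body is copied from ygen.py above):

```python
def filter_section(lines, tag):
    """Drop lines between a pair of '#--! TAG' markers, markers included."""
    filtered_lines = []
    include = True
    tag_text = '#--! %s' % tag
    for line in lines:
        if line.strip().startswith(tag_text):
            include = not include   # marker line toggles inclusion
        elif include:
            filtered_lines.append(line)
    return filtered_lines

src = [
    "state = stack[-1]\n",
    "#--! DEBUG\n",
    "debug.info('State  : %s', state)\n",
    "#--! DEBUG\n",
    "t = actions[state].get(ltype)\n",
]
print(filter_section(src, 'DEBUG'))
# ['state = stack[-1]\n', 't = actions[state].get(ltype)\n']
```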
ext/ply/setup.cfg (new file, 10 lines)
@@ -0,0 +1,10 @@
[bdist_wheel]
universal = 1

[metadata]
description-file = README.md

[egg_info]
tag_build =
tag_date = 0
@@ -14,13 +14,18 @@ PLY provides most of the standard lex/yacc features including support for empty
 productions, precedence rules, error recovery, and support for ambiguous grammars.

 PLY is extremely easy to use and provides very extensive error checking.
+It is compatible with both Python 2 and Python 3.
 """,
             license="""BSD""",
-            version = "3.2",
+            version = "3.11",
             author = "David Beazley",
             author_email = "dave@dabeaz.com",
             maintainer = "David Beazley",
             maintainer_email = "dave@dabeaz.com",
             url = "http://www.dabeaz.com/ply/",
             packages = ['ply'],
+            classifiers = [
+              'Programming Language :: Python :: 3',
+              'Programming Language :: Python :: 2',
+              ]
             )
@@ -1,11 +1,8 @@
 This directory mostly contains tests for various types of error
 conditions.  To run:

-   $ python testlex.py .
-   $ python testyacc.py .
-
-The tests can also be run using the Python unittest module.
-
-   $ python rununit.py
+   $ python testlex.py
+   $ python testyacc.py
+   $ python testcpp.py

 The script 'cleanup.sh' cleans up this directory to its original state.
@@ -36,14 +36,14 @@ t_ignore = " \t"

 def t_newline(t):
     r'\n+'
-    t.lineno += t.value.count("\n")
+    t.lexer.lineno += t.value.count("\n")

 def t_error(t):
     print("Illegal character '%s'" % t.value[0])
     t.lexer.skip(1)

 # Build the lexer
-lex.lex()
+lexer = lex.lex()
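The `t.lineno` to `t.lexer.lineno` change above reflects that PLY keeps the running line count on the lexer object, and a rule matching `\n+` must add every newline the match consumed, not just one. A PLY-free sketch of that bookkeeping (the class and input here are invented for illustration):

```python
import re

class TinyLineTracker:
    """Toy line tracker mimicking how a PLY t_newline rule updates lineno."""
    def __init__(self):
        self.lineno = 1

    def feed(self, text):
        for m in re.finditer(r'\n+', text):
            # add the full count of newlines in the match, not just 1
            self.lineno += m.group().count("\n")

lexer = TinyLineTracker()
lexer.feed("a = 1\n\n\nb = 2\n")
print(lexer.lineno)  # 5
```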
@@ -1,4 +1,4 @@
 #!/bin/sh

-rm -f *~ *.pyc *.pyo *.dif *.out
+rm -rf *~ *.pyc *.pyo *.dif *.out __pycache__
ext/ply/test/lex_literal3.py (new file, 26 lines)
@@ -0,0 +1,26 @@
# lex_literal3.py
#
# An empty literal specification given as a list
# Issue 8 : Literals empty list causes IndexError

import sys
if ".." not in sys.path: sys.path.insert(0,"..")

import ply.lex as lex

tokens = [
    "NUMBER",
    ]

literals = []

def t_NUMBER(t):
    r'\d+'
    return t

def t_error(t):
    pass

lex.lex()
@@ -45,7 +45,7 @@ def t_error(t):
     t.lexer.skip(1)

 # Build the lexer
-lex.lex(optimize=1,lextab="lexdir.sub.calctab",outputdir="lexdir/sub")
+lex.lex(optimize=1,lextab="lexdir.sub.calctab" ,outputdir="lexdir/sub")
 lex.runmain(data="3+4")
ext/ply/test/lex_optimize4.py (new file, 26 lines)
@@ -0,0 +1,26 @@
# -----------------------------------------------------------------------------
# lex_optimize4.py
# -----------------------------------------------------------------------------
import re
import sys

if ".." not in sys.path: sys.path.insert(0,"..")
import ply.lex as lex

tokens = [
    "PLUS",
    "MINUS",
    "NUMBER",
    ]

t_PLUS = r'\+?'
t_MINUS = r'-'
t_NUMBER = r'(\d+)'

def t_error(t):
    pass

# Build the lexer
lex.lex(optimize=True, lextab="opt4tab", reflags=re.UNICODE)
lex.runmain(data="3+4")
ext/ply/test/pkg_test1/__init__.py (new file, 9 lines)
@@ -0,0 +1,9 @@
# Tests proper handling of lextab and parsetab files in package structures

# Here for testing purposes
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')

from .parsing.calcparse import parser

ext/ply/test/pkg_test1/parsing/__init__.py (new file, 0 lines)

ext/ply/test/pkg_test1/parsing/calclex.py (new file, 47 lines)
@@ -0,0 +1,47 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_EQUALS = r'='
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex(optimize=True)

ext/ply/test/pkg_test1/parsing/calcparse.py (new file, 66 lines)
@@ -0,0 +1,66 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specifier grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

parser = yacc.yacc()

ext/ply/test/pkg_test1/parsing/lextab.py (new file, 10 lines)
@@ -0,0 +1,10 @@
# lextab.py. This file automatically created by PLY (version 3.11). Don't edit!
_tabversion = '3.10'
_lextokens = set(('DIVIDE', 'EQUALS', 'LPAREN', 'MINUS', 'NAME', 'NUMBER', 'PLUS', 'RPAREN', 'TIMES'))
_lexreflags = 64
_lexliterals = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere = {'INITIAL': [('(?P<t_NUMBER>\\d+)|(?P<t_newline>\\n+)|(?P<t_NAME>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<t_PLUS>\\+)|(?P<t_LPAREN>\\()|(?P<t_TIMES>\\*)|(?P<t_RPAREN>\\))|(?P<t_EQUALS>=)|(?P<t_DIVIDE>/)|(?P<t_MINUS>-)', [None, ('t_NUMBER', 'NUMBER'), ('t_newline', 'newline'), (None, 'NAME'), (None, 'PLUS'), (None, 'LPAREN'), (None, 'TIMES'), (None, 'RPAREN'), (None, 'EQUALS'), (None, 'DIVIDE'), (None, 'MINUS')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
9
ext/ply/test/pkg_test2/__init__.py
Normal file
9
ext/ply/test/pkg_test2/__init__.py
Normal file
@@ -0,0 +1,9 @@
|
||||
# Tests proper handling of lextab and parsetab files in package structures
|
||||
|
||||
# Here for testing purposes
|
||||
import sys
|
||||
if '..' not in sys.path:
|
||||
sys.path.insert(0, '..')
|
||||
|
||||
from .parsing.calcparse import parser
|
||||
|
||||
0
ext/ply/test/pkg_test2/parsing/__init__.py
Normal file
0
ext/ply/test/pkg_test2/parsing/__init__.py
Normal file
47
ext/ply/test/pkg_test2/parsing/calclex.py
Normal file
@@ -0,0 +1,47 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_EQUALS  = r'='
t_LPAREN  = r'\('
t_RPAREN  = r'\)'
t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex(optimize=True, lextab='calclextab')
10
ext/ply/test/pkg_test2/parsing/calclextab.py
Normal file
@@ -0,0 +1,10 @@
# calclextab.py. This file automatically created by PLY (version 3.11). Don't edit!
_tabversion   = '3.10'
_lextokens    = set(('DIVIDE', 'EQUALS', 'LPAREN', 'MINUS', 'NAME', 'NUMBER', 'PLUS', 'RPAREN', 'TIMES'))
_lexreflags   = 64
_lexliterals  = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere   = {'INITIAL': [('(?P<t_NUMBER>\\d+)|(?P<t_newline>\\n+)|(?P<t_NAME>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<t_PLUS>\\+)|(?P<t_LPAREN>\\()|(?P<t_TIMES>\\*)|(?P<t_RPAREN>\\))|(?P<t_EQUALS>=)|(?P<t_DIVIDE>/)|(?P<t_MINUS>-)', [None, ('t_NUMBER', 'NUMBER'), ('t_newline', 'newline'), (None, 'NAME'), (None, 'PLUS'), (None, 'LPAREN'), (None, 'TIMES'), (None, 'RPAREN'), (None, 'EQUALS'), (None, 'DIVIDE'), (None, 'MINUS')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
66
ext/ply/test/pkg_test2/parsing/calcparse.py
Normal file
@@ -0,0 +1,66 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specified grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

parser = yacc.yacc(tabmodule='calcparsetab')
40
ext/ply/test/pkg_test2/parsing/calcparsetab.py
Normal file
@@ -0,0 +1,40 @@

# calcparsetab.py
# This file is automatically generated. Do not edit.
# pylint: disable=W,C,R
_tabversion = '3.10'

_lr_method = 'LALR'

_lr_signature = 'leftPLUSMINUSleftTIMESDIVIDErightUMINUSDIVIDE EQUALS LPAREN MINUS NAME NUMBER PLUS RPAREN TIMESstatement : NAME EQUALS expressionstatement : expressionexpression : expression PLUS expression\n | expression MINUS expression\n | expression TIMES expression\n | expression DIVIDE expressionexpression : MINUS expression %prec UMINUSexpression : LPAREN expression RPARENexpression : NUMBERexpression : NAME'

_lr_action_items = {'PLUS':([2,4,6,7,8,9,15,16,17,18,19,20,],[-9,-10,11,-10,-7,11,-8,11,-3,-4,-6,-5,]),'MINUS':([0,1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,],[1,1,-9,1,-10,12,-10,-7,12,1,1,1,1,1,-8,12,-3,-4,-6,-5,]),'EQUALS':([4,],[10,]),'NUMBER':([0,1,3,10,11,12,13,14,],[2,2,2,2,2,2,2,2,]),'LPAREN':([0,1,3,10,11,12,13,14,],[3,3,3,3,3,3,3,3,]),'NAME':([0,1,3,10,11,12,13,14,],[4,7,7,7,7,7,7,7,]),'TIMES':([2,4,6,7,8,9,15,16,17,18,19,20,],[-9,-10,14,-10,-7,14,-8,14,14,14,-6,-5,]),'$end':([2,4,5,6,7,8,15,16,17,18,19,20,],[-9,-10,0,-2,-10,-7,-8,-1,-3,-4,-6,-5,]),'RPAREN':([2,7,8,9,15,17,18,19,20,],[-9,-10,-7,15,-8,-3,-4,-6,-5,]),'DIVIDE':([2,4,6,7,8,9,15,16,17,18,19,20,],[-9,-10,13,-10,-7,13,-8,13,13,13,-6,-5,]),}

_lr_action = {}
for _k, _v in _lr_action_items.items():
    for _x,_y in zip(_v[0],_v[1]):
        if not _x in _lr_action: _lr_action[_x] = {}
        _lr_action[_x][_k] = _y
del _lr_action_items

_lr_goto_items = {'statement':([0,],[5,]),'expression':([0,1,3,10,11,12,13,14,],[6,8,9,16,17,18,19,20,]),}

_lr_goto = {}
for _k, _v in _lr_goto_items.items():
    for _x, _y in zip(_v[0], _v[1]):
        if not _x in _lr_goto: _lr_goto[_x] = {}
        _lr_goto[_x][_k] = _y
del _lr_goto_items
_lr_productions = [
  ("S' -> statement","S'",1,None,None,None),
  ('statement -> NAME EQUALS expression','statement',3,'p_statement_assign','calcparse.py',21),
  ('statement -> expression','statement',1,'p_statement_expr','calcparse.py',25),
  ('expression -> expression PLUS expression','expression',3,'p_expression_binop','calcparse.py',29),
  ('expression -> expression MINUS expression','expression',3,'p_expression_binop','calcparse.py',30),
  ('expression -> expression TIMES expression','expression',3,'p_expression_binop','calcparse.py',31),
  ('expression -> expression DIVIDE expression','expression',3,'p_expression_binop','calcparse.py',32),
  ('expression -> MINUS expression','expression',2,'p_expression_uminus','calcparse.py',39),
  ('expression -> LPAREN expression RPAREN','expression',3,'p_expression_group','calcparse.py',43),
  ('expression -> NUMBER','expression',1,'p_expression_number','calcparse.py',47),
  ('expression -> NAME','expression',1,'p_expression_name','calcparse.py',51),
]
9
ext/ply/test/pkg_test3/__init__.py
Normal file
@@ -0,0 +1,9 @@
# Tests proper handling of lextab and parsetab files in package structures

# Here for testing purposes
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')

from .parsing.calcparse import parser

0
ext/ply/test/pkg_test3/generated/__init__.py
Normal file
10
ext/ply/test/pkg_test3/generated/lextab.py
Normal file
@@ -0,0 +1,10 @@
# lextab.py. This file automatically created by PLY (version 3.11). Don't edit!
_tabversion   = '3.10'
_lextokens    = set(('DIVIDE', 'EQUALS', 'LPAREN', 'MINUS', 'NAME', 'NUMBER', 'PLUS', 'RPAREN', 'TIMES'))
_lexreflags   = 64
_lexliterals  = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere   = {'INITIAL': [('(?P<t_NUMBER>\\d+)|(?P<t_newline>\\n+)|(?P<t_NAME>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<t_PLUS>\\+)|(?P<t_LPAREN>\\()|(?P<t_TIMES>\\*)|(?P<t_RPAREN>\\))|(?P<t_EQUALS>=)|(?P<t_DIVIDE>/)|(?P<t_MINUS>-)', [None, ('t_NUMBER', 'NUMBER'), ('t_newline', 'newline'), (None, 'NAME'), (None, 'PLUS'), (None, 'LPAREN'), (None, 'TIMES'), (None, 'RPAREN'), (None, 'EQUALS'), (None, 'DIVIDE'), (None, 'MINUS')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
0
ext/ply/test/pkg_test3/parsing/__init__.py
Normal file
47
ext/ply/test/pkg_test3/parsing/calclex.py
Normal file
@@ -0,0 +1,47 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_EQUALS  = r'='
t_LPAREN  = r'\('
t_RPAREN  = r'\)'
t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex(optimize=True, lextab='pkg_test3.generated.lextab')
66
ext/ply/test/pkg_test3/parsing/calcparse.py
Normal file
@@ -0,0 +1,66 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specified grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

parser = yacc.yacc(tabmodule='pkg_test3.generated.parsetab')
25
ext/ply/test/pkg_test4/__init__.py
Normal file
@@ -0,0 +1,25 @@
# Tests proper handling of lextab and parsetab files in package structures
# Check of warning messages when files aren't writable

# Here for testing purposes
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')

import ply.lex
import ply.yacc

def patched_open(filename, mode):
    if 'w' in mode:
        raise IOError("Permission denied %r" % filename)
    return open(filename, mode)

ply.lex.open = patched_open
ply.yacc.open = patched_open
try:
    from .parsing.calcparse import parser
finally:
    del ply.lex.open
    del ply.yacc.open

0
ext/ply/test/pkg_test4/parsing/__init__.py
Normal file
47
ext/ply/test/pkg_test4/parsing/calclex.py
Normal file
@@ -0,0 +1,47 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_EQUALS  = r'='
t_LPAREN  = r'\('
t_RPAREN  = r'\)'
t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
lexer = lex.lex(optimize=True)
66
ext/ply/test/pkg_test4/parsing/calcparse.py
Normal file
@@ -0,0 +1,66 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specified grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

parser = yacc.yacc()
9
ext/ply/test/pkg_test5/__init__.py
Normal file
@@ -0,0 +1,9 @@
# Tests proper handling of lextab and parsetab files in package structures

# Here for testing purposes
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')

from .parsing.calcparse import parser

0
ext/ply/test/pkg_test5/parsing/__init__.py
Normal file
48
ext/ply/test/pkg_test5/parsing/calclex.py
Normal file
@@ -0,0 +1,48 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_EQUALS  = r'='
t_LPAREN  = r'\('
t_RPAREN  = r'\)'
t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import os.path
lexer = lex.lex(optimize=True, outputdir=os.path.dirname(__file__))
67
ext/ply/test/pkg_test5/parsing/calcparse.py
Normal file
@@ -0,0 +1,67 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specified grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

import os.path
parser = yacc.yacc(outputdir=os.path.dirname(__file__))
10
ext/ply/test/pkg_test5/parsing/lextab.py
Normal file
@@ -0,0 +1,10 @@
# lextab.py. This file automatically created by PLY (version 3.11). Don't edit!
_tabversion   = '3.10'
_lextokens    = set(('DIVIDE', 'EQUALS', 'LPAREN', 'MINUS', 'NAME', 'NUMBER', 'PLUS', 'RPAREN', 'TIMES'))
_lexreflags   = 64
_lexliterals  = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere   = {'INITIAL': [('(?P<t_NUMBER>\\d+)|(?P<t_newline>\\n+)|(?P<t_NAME>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<t_LPAREN>\\()|(?P<t_PLUS>\\+)|(?P<t_TIMES>\\*)|(?P<t_RPAREN>\\))|(?P<t_EQUALS>=)|(?P<t_DIVIDE>/)|(?P<t_MINUS>-)', [None, ('t_NUMBER', 'NUMBER'), ('t_newline', 'newline'), (None, 'NAME'), (None, 'LPAREN'), (None, 'PLUS'), (None, 'TIMES'), (None, 'RPAREN'), (None, 'EQUALS'), (None, 'DIVIDE'), (None, 'MINUS')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
9
ext/ply/test/pkg_test6/__init__.py
Normal file
@@ -0,0 +1,9 @@
# Tests proper sorting of modules in yacc.ParserReflect.get_pfunctions

# Here for testing purposes
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')

from .parsing.calcparse import parser

0
ext/ply/test/pkg_test6/parsing/__init__.py
Normal file
48
ext/ply/test/pkg_test6/parsing/calclex.py
Normal file
@@ -0,0 +1,48 @@
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------

import ply.lex as lex

tokens = (
    'NAME','NUMBER',
    'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
    'LPAREN','RPAREN',
    )

# Tokens

t_PLUS    = r'\+'
t_MINUS   = r'-'
t_TIMES   = r'\*'
t_DIVIDE  = r'/'
t_EQUALS  = r'='
t_LPAREN  = r'\('
t_RPAREN  = r'\)'
t_NAME    = r'[a-zA-Z_][a-zA-Z0-9_]*'

def t_NUMBER(t):
    r'\d+'
    try:
        t.value = int(t.value)
    except ValueError:
        print("Integer value too large %s" % t.value)
        t.value = 0
    return t

t_ignore = " \t"

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count("\n")

def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)

# Build the lexer
import os.path
lexer = lex.lex(optimize=True, outputdir=os.path.dirname(__file__))
33
ext/ply/test/pkg_test6/parsing/calcparse.py
Normal file
@@ -0,0 +1,33 @@
# -----------------------------------------------------------------------------
# yacc_simple.py
#
# A simple, properly specified grammar
# -----------------------------------------------------------------------------

from .calclex import tokens
from ply import yacc

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

from .statement import *

from .expression import *

def p_error(t):
    print("Syntax error at '%s'" % t.value)

import os.path
parser = yacc.yacc(outputdir=os.path.dirname(__file__))
31
ext/ply/test/pkg_test6/parsing/expression.py
Normal file
@@ -0,0 +1,31 @@
# This file contains definitions of expression grammar

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+'  : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0
10
ext/ply/test/pkg_test6/parsing/lextab.py
Normal file
@@ -0,0 +1,10 @@
# lextab.py. This file automatically created by PLY (version 3.11). Don't edit!
_tabversion   = '3.10'
_lextokens    = set(('DIVIDE', 'EQUALS', 'LPAREN', 'MINUS', 'NAME', 'NUMBER', 'PLUS', 'RPAREN', 'TIMES'))
_lexreflags   = 64
_lexliterals  = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere   = {'INITIAL': [('(?P<t_NUMBER>\\d+)|(?P<t_newline>\\n+)|(?P<t_NAME>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<t_LPAREN>\\()|(?P<t_PLUS>\\+)|(?P<t_TIMES>\\*)|(?P<t_RPAREN>\\))|(?P<t_EQUALS>=)|(?P<t_DIVIDE>/)|(?P<t_MINUS>-)', [None, ('t_NUMBER', 'NUMBER'), ('t_newline', 'newline'), (None, 'NAME'), (None, 'LPAREN'), (None, 'PLUS'), (None, 'TIMES'), (None, 'RPAREN'), (None, 'EQUALS'), (None, 'DIVIDE'), (None, 'MINUS')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
9
ext/ply/test/pkg_test6/parsing/statement.py
Normal file
@@ -0,0 +1,9 @@
# This file contains definitions of statement grammar

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    t[0] = t[1]
101
ext/ply/test/testcpp.py
Normal file
@@ -0,0 +1,101 @@
from unittest import TestCase, main

from multiprocessing import Process, Queue
from six.moves.queue import Empty

import sys

if ".." not in sys.path:
    sys.path.insert(0, "..")

from ply.lex import lex
from ply.cpp import *


def preprocessing(in_, out_queue):
    out = None

    try:
        p = Preprocessor(lex())
        p.parse(in_)
        tokens = [t.value for t in p.parser]
        out = "".join(tokens)
    finally:
        out_queue.put(out)

class CPPTests(TestCase):
    "Tests related to ANSI-C style lexical preprocessor."

    def __test_preprocessing(self, in_, expected, time_limit = 1.0):
        out_queue = Queue()

        preprocessor = Process(
            name = "PLY`s C preprocessor",
            target = preprocessing,
            args = (in_, out_queue)
        )

        preprocessor.start()

        try:
            out = out_queue.get(timeout = time_limit)
        except Empty:
            preprocessor.terminate()
            raise RuntimeError("Time limit exceeded!")
        else:
            self.assertMultiLineEqual(out, expected)

    def test_concatenation(self):
        self.__test_preprocessing("""\
#define a(x) x##_
#define b(x) _##x
#define c(x) _##x##_
#define d(x,y) _##x##y##_

a(i)
b(j)
c(k)
d(q,s)"""
            , """\





i_
_j
_k_
_qs_"""
        )

    def test_deadloop_macro(self):
        # If there is a word equal to the name of a parametrized macro, the
        # attempt to expand that word as a macro causes the parser to fall
        # into an infinite loop.

        self.__test_preprocessing("""\
#define a(x) x

a;"""
            , """\


a;"""
        )

    def test_index_error(self):
        # If there are no tokens after a word ("a") equal to the name of
        # a parameterized macro, the attempt to expand this word leads to
        # IndexError.

        self.__test_preprocessing("""\
#define a(x) x

a"""
            , """\


a"""
        )

main()
@@ -7,12 +7,57 @@ except ImportError:
    import io as StringIO

import sys
import os
import warnings
import platform

sys.path.insert(0,"..")
sys.tracebacklimit = 0

import ply.lex

def check_expected(result,expected):
try:
    from importlib.util import cache_from_source
except ImportError:
    # Python 2.7, but we don't care.
    cache_from_source = None


def make_pymodule_path(filename, optimization=None):
    path = os.path.dirname(filename)
    file = os.path.basename(filename)
    mod, ext = os.path.splitext(file)

    if sys.hexversion >= 0x3050000:
        fullpath = cache_from_source(filename, optimization=optimization)
    elif sys.hexversion >= 0x3040000:
        fullpath = cache_from_source(filename, ext=='.pyc')
    elif sys.hexversion >= 0x3020000:
        import imp
        modname = mod+"."+imp.get_tag()+ext
        fullpath = os.path.join(path,'__pycache__',modname)
    else:
        fullpath = filename
    return fullpath

def pymodule_out_exists(filename, optimization=None):
    return os.path.exists(make_pymodule_path(filename,
                                             optimization=optimization))

def pymodule_out_remove(filename, optimization=None):
    os.remove(make_pymodule_path(filename, optimization=optimization))

def implementation():
    if platform.system().startswith("Java"):
        return "Jython"
    elif hasattr(sys, "pypy_version_info"):
        return "PyPy"
    else:
        return "CPython"

test_pyo = (implementation() == 'CPython')

def check_expected(result, expected, contains=False):
    if sys.version_info[0] >= 3:
        if isinstance(result,str):
            result = result.encode('ascii')
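(Reviewer note: the `make_pymodule_path` helper added above keys off the stdlib's `importlib.util.cache_from_source` on Python 3.5+. A minimal standalone sketch of what that call returns — the `lextab.py` filename here is just illustrative:)

```python
import importlib.util

# Map a source file to its bytecode cache path, with and without an
# optimization level (mirroring the optimization= branch in the helper).
plain = importlib.util.cache_from_source('lextab.py')
opt1 = importlib.util.cache_from_source('lextab.py', optimization=1)

# Both paths land in __pycache__ and carry the interpreter's cache tag;
# the optimization level appears as an ".opt-1" infix before ".pyc".
print(plain)
print(opt1)
```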
@@ -21,13 +66,16 @@ def check_expected(result,expected):
    resultlines = result.splitlines()
    expectedlines = expected.splitlines()


    if len(resultlines) != len(expectedlines):
        return False

    for rline,eline in zip(resultlines,expectedlines):
        if not rline.endswith(eline):
            return False
        if contains:
            if eline not in rline:
                return False
        else:
            if not rline.endswith(eline):
                return False
    return True

def run_import(module):
@@ -40,6 +88,9 @@ class LexErrorWarningTests(unittest.TestCase):
    def setUp(self):
        sys.stderr = StringIO.StringIO()
        sys.stdout = StringIO.StringIO()
        if sys.hexversion >= 0x3020000:
            warnings.filterwarnings('ignore',category=ResourceWarning)

    def tearDown(self):
        sys.stderr = sys.__stderr__
        sys.stdout = sys.__stdout__
@@ -114,8 +165,13 @@ class LexErrorWarningTests(unittest.TestCase):
    def test_lex_re1(self):
        self.assertRaises(SyntaxError,run_import,"lex_re1")
        result = sys.stderr.getvalue()
        if sys.hexversion < 0x3050000:
            msg = "Invalid regular expression for rule 't_NUMBER'. unbalanced parenthesis\n"
        else:
            msg = "Invalid regular expression for rule 't_NUMBER'. missing ), unterminated subpattern at position 0"
        self.assert_(check_expected(result,
                                    "Invalid regular expression for rule 't_NUMBER'. unbalanced parenthesis\n"))
                                    msg,
                                    contains=True))

    def test_lex_re2(self):
        self.assertRaises(SyntaxError,run_import,"lex_re2")
@@ -126,9 +182,19 @@ class LexErrorWarningTests(unittest.TestCase):
    def test_lex_re3(self):
        self.assertRaises(SyntaxError,run_import,"lex_re3")
        result = sys.stderr.getvalue()
#        self.assert_(check_expected(result,
#                                    "Invalid regular expression for rule 't_POUND'. unbalanced parenthesis\n"
#                                    "Make sure '#' in rule 't_POUND' is escaped with '\\#'\n"))

        if sys.hexversion < 0x3050000:
            msg = ("Invalid regular expression for rule 't_POUND'. unbalanced parenthesis\n"
                   "Make sure '#' in rule 't_POUND' is escaped with '\\#'\n")
        else:
            msg = ("Invalid regular expression for rule 't_POUND'. missing ), unterminated subpattern at position 0\n"
                   "ERROR: Make sure '#' in rule 't_POUND' is escaped with '\#'")
        self.assert_(check_expected(result,
                                    "Invalid regular expression for rule 't_POUND'. unbalanced parenthesis\n"
                                    "Make sure '#' in rule 't_POUND' is escaped with '\\#'\n"))
                                    msg,
                                    contains=True), result)

    def test_lex_rule1(self):
        self.assertRaises(SyntaxError,run_import,"lex_rule1")
@@ -294,6 +360,7 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
|
||||
def test_lex_optimize(self):
|
||||
try:
|
||||
os.remove("lextab.py")
|
||||
@@ -316,7 +383,6 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("lextab.py"))
|
||||
|
||||
|
||||
p = subprocess.Popen([sys.executable,'-O','lex_optimize.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
@@ -325,9 +391,10 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("lextab.pyo"))
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("lextab.pyo", 1))
|
||||
pymodule_out_remove("lextab.pyo", 1)
|
||||
|
||||
os.remove("lextab.pyo")
|
||||
p = subprocess.Popen([sys.executable,'-OO','lex_optimize.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
@@ -335,17 +402,19 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("lextab.pyo"))
|
||||
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("lextab.pyo", 2))
|
||||
try:
|
||||
os.remove("lextab.py")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("lextab.pyc")
|
||||
pymodule_out_remove("lextab.pyc")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("lextab.pyo")
|
||||
pymodule_out_remove("lextab.pyo", 2)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
@@ -377,8 +446,9 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("opt2tab.pyo"))
|
||||
os.remove("opt2tab.pyo")
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("opt2tab.pyo", 1))
|
||||
pymodule_out_remove("opt2tab.pyo", 1)
|
||||
p = subprocess.Popen([sys.executable,'-OO','lex_optimize2.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
@@ -386,17 +456,18 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("opt2tab.pyo"))
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("opt2tab.pyo", 2))
|
||||
try:
|
||||
os.remove("opt2tab.py")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("opt2tab.pyc")
|
||||
pymodule_out_remove("opt2tab.pyc")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("opt2tab.pyo")
|
||||
pymodule_out_remove("opt2tab.pyo", 2)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
@@ -425,8 +496,10 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("lexdir/sub/calctab.pyo"))
|
||||
os.remove("lexdir/sub/calctab.pyo")
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("lexdir/sub/calctab.pyo", 1))
|
||||
pymodule_out_remove("lexdir/sub/calctab.pyo", 1)
|
||||
|
||||
p = subprocess.Popen([sys.executable,'-OO','lex_optimize3.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
@@ -434,12 +507,33 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(PLUS,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("lexdir/sub/calctab.pyo"))
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("lexdir/sub/calctab.pyo", 2))
|
||||
try:
|
||||
shutil.rmtree("lexdir")
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
def test_lex_optimize4(self):
|
||||
|
||||
# Regression test to make sure that reflags works correctly
|
||||
# on Python 3.
|
||||
|
||||
for extension in ['py', 'pyc']:
|
||||
try:
|
||||
os.remove("opt4tab.{0}".format(extension))
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
run_import("lex_optimize4")
|
||||
run_import("lex_optimize4")
|
||||
|
||||
for extension in ['py', 'pyc']:
|
||||
try:
|
||||
os.remove("opt4tab.{0}".format(extension))
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
def test_lex_opt_alias(self):
|
||||
try:
|
||||
os.remove("aliastab.py")
|
||||
@@ -468,8 +562,10 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(+,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("aliastab.pyo"))
|
||||
os.remove("aliastab.pyo")
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("aliastab.pyo", 1))
|
||||
pymodule_out_remove("aliastab.pyo", 1)
|
||||
|
||||
p = subprocess.Popen([sys.executable,'-OO','lex_opt_alias.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
@@ -477,17 +573,19 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
"(NUMBER,3,1,0)\n"
|
||||
"(+,'+',1,1)\n"
|
||||
"(NUMBER,4,1,2)\n"))
|
||||
self.assert_(os.path.exists("aliastab.pyo"))
|
||||
|
||||
if test_pyo:
|
||||
self.assert_(pymodule_out_exists("aliastab.pyo", 2))
|
||||
try:
|
||||
os.remove("aliastab.py")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("aliastab.pyc")
|
||||
pymodule_out_remove("aliastab.pyc")
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
os.remove("aliastab.pyo")
|
||||
pymodule_out_remove("aliastab.pyo", 2)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
@@ -518,21 +616,22 @@ class LexBuildOptionTests(unittest.TestCase):
|
||||
|
||||
self.assert_(os.path.exists("manytab.py"))
|
||||
|
||||
p = subprocess.Popen([sys.executable,'-O','lex_many_tokens.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
self.assert_(check_expected(result,
|
||||
"(TOK34,'TOK34:',1,0)\n"
|
||||
"(TOK143,'TOK143:',1,7)\n"
|
||||
"(TOK269,'TOK269:',1,15)\n"
|
||||
"(TOK372,'TOK372:',1,23)\n"
|
||||
"(TOK452,'TOK452:',1,31)\n"
|
||||
"(TOK561,'TOK561:',1,39)\n"
|
||||
"(TOK999,'TOK999:',1,47)\n"
|
||||
))
|
||||
if implementation() == 'CPython':
|
||||
p = subprocess.Popen([sys.executable,'-O','lex_many_tokens.py'],
|
||||
stdout=subprocess.PIPE)
|
||||
result = p.stdout.read()
|
||||
self.assert_(check_expected(result,
|
||||
"(TOK34,'TOK34:',1,0)\n"
|
||||
"(TOK143,'TOK143:',1,7)\n"
|
||||
"(TOK269,'TOK269:',1,15)\n"
|
||||
"(TOK372,'TOK372:',1,23)\n"
|
||||
"(TOK452,'TOK452:',1,31)\n"
|
||||
"(TOK561,'TOK561:',1,39)\n"
|
||||
"(TOK999,'TOK999:',1,47)\n"
|
||||
))
|
||||
|
||||
self.assert_(os.path.exists("manytab.pyo"))
|
||||
os.remove("manytab.pyo")
|
||||
self.assert_(pymodule_out_exists("manytab.pyo", 1))
|
||||
pymodule_out_remove("manytab.pyo", 1)
|
||||
try:
|
||||
os.remove("manytab.py")
|
||||
except OSError:
|
||||
|
||||
@@ -8,28 +8,68 @@ except ImportError:
|
||||
|
||||
import sys
|
||||
import os
|
||||
import warnings
|
||||
import re
|
||||
import platform
|
||||
|
||||
sys.path.insert(0,"..")
|
||||
sys.tracebacklimit = 0
|
||||
|
||||
import ply.yacc
|
||||
|
||||
def check_expected(result,expected):
|
||||
resultlines = []
|
||||
def make_pymodule_path(filename):
|
||||
path = os.path.dirname(filename)
|
||||
file = os.path.basename(filename)
|
||||
mod, ext = os.path.splitext(file)
|
||||
|
||||
if sys.hexversion >= 0x3040000:
|
||||
import importlib.util
|
||||
fullpath = importlib.util.cache_from_source(filename, ext=='.pyc')
|
||||
elif sys.hexversion >= 0x3020000:
|
||||
import imp
|
||||
modname = mod+"."+imp.get_tag()+ext
|
||||
fullpath = os.path.join(path,'__pycache__',modname)
|
||||
else:
|
||||
fullpath = filename
|
||||
return fullpath
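The new `make_pymodule_path` helper above maps a source filename to the location of its cached bytecode. On Python 3.4 and later the standard library computes the `__pycache__` path directly; a minimal sketch of the call the helper relies on (the interpreter tag in the result, e.g. `cpython-310`, varies by version):

```python
import importlib.util

# Map a .py source file to its cached bytecode path, as
# make_pymodule_path() does on Python >= 3.4.
cached = importlib.util.cache_from_source("lextab.py")
print(cached)  # e.g. __pycache__/lextab.cpython-310.pyc
```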

def pymodule_out_exists(filename):
    return os.path.exists(make_pymodule_path(filename))

def pymodule_out_remove(filename):
    os.remove(make_pymodule_path(filename))

def implementation():
    if platform.system().startswith("Java"):
        return "Jython"
    elif hasattr(sys, "pypy_version_info"):
        return "PyPy"
    else:
        return "CPython"

# Check the output to see if it contains all of a set of expected output lines.
# This alternate implementation looks weird, but is needed to properly handle
# some variations in error message order that occurs due to dict hash table
# randomization that was introduced in Python 3.3
def check_expected(result, expected):
    # Normalize 'state n' text to account for randomization effects in Python 3.3
    expected = re.sub(r' state \d+', 'state <n>', expected)
    result = re.sub(r' state \d+', 'state <n>', result)

    resultlines = set()
    for line in result.splitlines():
        if line.startswith("WARNING: "):
            line = line[9:]
        elif line.startswith("ERROR: "):
            line = line[7:]
        resultlines.append(line)
        resultlines.add(line)

    expectedlines = expected.splitlines()
    if len(resultlines) != len(expectedlines):
        return False
    for rline,eline in zip(resultlines,expectedlines):
        if not rline.endswith(eline):
            return False
    return True
    # Selectively remove expected lines from the output
    for eline in expected.splitlines():
        resultlines = set(line for line in resultlines if not line.endswith(eline))

    # Return True if no result lines remain
    return not bool(resultlines)
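The set-based rewrite above makes the comparison insensitive to line order, which matters once dict hash randomization can reorder warning output between runs. The core idea, reduced to a standalone sketch (the helper name is ours, not PLY's):

```python
def lines_match_unordered(result, expected):
    # Each expected line must match, as a suffix, some result line;
    # any leftover result lines mean a mismatch. Order is irrelevant.
    remaining = set(result.splitlines())
    for eline in expected.splitlines():
        remaining = {line for line in remaining if not line.endswith(eline)}
    return not remaining

# The same warnings are accepted in either order:
out_a = "WARNING: unused token A\nWARNING: unused token B"
out_b = "WARNING: unused token B\nWARNING: unused token A"
wanted = "unused token A\nunused token B"
```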

def run_import(module):
    code = "import "+module
@@ -43,10 +83,14 @@ class YaccErrorWarningTests(unittest.TestCase):
        sys.stdout = StringIO.StringIO()
        try:
            os.remove("parsetab.py")
            os.remove("parsetab.pyc")
            pymodule_out_remove("parsetab.pyc")
        except OSError:
            pass

        if sys.hexversion >= 0x3020000:
            warnings.filterwarnings('ignore', category=ResourceWarning)
            warnings.filterwarnings('ignore', category=DeprecationWarning)

    def tearDown(self):
        sys.stderr = sys.__stderr__
        sys.stdout = sys.__stdout__
@@ -148,7 +192,38 @@ class YaccErrorWarningTests(unittest.TestCase):
        self.assert_(check_expected(result,
                                    "yacc_error4.py:62: Illegal rule name 'error'. Already defined as a token\n"
                                    ))

    def test_yacc_error5(self):
        run_import("yacc_error5")
        result = sys.stdout.getvalue()
        self.assert_(check_expected(result,
                                    "Group at 3:10 to 3:12\n"
                                    "Undefined name 'a'\n"
                                    "Syntax error at 'b'\n"
                                    "Syntax error at 4:18 to 4:22\n"
                                    "Assignment Error at 2:5 to 5:27\n"
                                    "13\n"
                                    ))

    def test_yacc_error6(self):
        run_import("yacc_error6")
        result = sys.stdout.getvalue()
        self.assert_(check_expected(result,
                                    "a=7\n"
                                    "Line 3: Syntax error at '*'\n"
                                    "c=21\n"
                                    ))

    def test_yacc_error7(self):
        run_import("yacc_error7")
        result = sys.stdout.getvalue()
        self.assert_(check_expected(result,
                                    "a=7\n"
                                    "Line 3: Syntax error at '*'\n"
                                    "c=21\n"
                                    ))

    def test_yacc_inf(self):
        self.assertRaises(ply.yacc.YaccError,run_import,"yacc_inf")
        result = sys.stderr.getvalue()
@@ -261,6 +336,7 @@ class YaccErrorWarningTests(unittest.TestCase):
        self.assert_(check_expected(result,
                                    "Generating LALR tables\n"
                                    ))

    def test_yacc_sr(self):
        run_import("yacc_sr")
        result = sys.stderr.getvalue()
@@ -276,6 +352,13 @@ class YaccErrorWarningTests(unittest.TestCase):
                                    "yacc_term1.py:24: Illegal rule name 'NUMBER'. Already defined as a token\n"
                                    ))

    def test_yacc_unicode_literals(self):
        run_import("yacc_unicode_literals")
        result = sys.stderr.getvalue()
        self.assert_(check_expected(result,
                                    "Generating LALR tables\n"
                                    ))

    def test_yacc_unused(self):
        self.assertRaises(ply.yacc.YaccError,run_import,"yacc_unused")
        result = sys.stderr.getvalue()
@@ -297,7 +380,6 @@ class YaccErrorWarningTests(unittest.TestCase):
    def test_yacc_uprec(self):
        self.assertRaises(ply.yacc.YaccError,run_import,"yacc_uprec")
        result = sys.stderr.getvalue()
        print repr(result)
        self.assert_(check_expected(result,
                                    "yacc_uprec.py:37: Nothing known about the precedence of 'UMINUS'\n"
                                    ))
@@ -319,6 +401,52 @@ class YaccErrorWarningTests(unittest.TestCase):
                                    "Precedence rule 'left' defined for unknown symbol '/'\n"
                                    ))

    def test_pkg_test1(self):
        from pkg_test1 import parser
        self.assertTrue(os.path.exists('pkg_test1/parsing/parsetab.py'))
        self.assertTrue(os.path.exists('pkg_test1/parsing/lextab.py'))
        self.assertTrue(os.path.exists('pkg_test1/parsing/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

    def test_pkg_test2(self):
        from pkg_test2 import parser
        self.assertTrue(os.path.exists('pkg_test2/parsing/calcparsetab.py'))
        self.assertTrue(os.path.exists('pkg_test2/parsing/calclextab.py'))
        self.assertTrue(os.path.exists('pkg_test2/parsing/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

    def test_pkg_test3(self):
        from pkg_test3 import parser
        self.assertTrue(os.path.exists('pkg_test3/generated/parsetab.py'))
        self.assertTrue(os.path.exists('pkg_test3/generated/lextab.py'))
        self.assertTrue(os.path.exists('pkg_test3/generated/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

    def test_pkg_test4(self):
        from pkg_test4 import parser
        self.assertFalse(os.path.exists('pkg_test4/parsing/parsetab.py'))
        self.assertFalse(os.path.exists('pkg_test4/parsing/lextab.py'))
        self.assertFalse(os.path.exists('pkg_test4/parsing/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

    def test_pkg_test5(self):
        from pkg_test5 import parser
        self.assertTrue(os.path.exists('pkg_test5/parsing/parsetab.py'))
        self.assertTrue(os.path.exists('pkg_test5/parsing/lextab.py'))
        self.assertTrue(os.path.exists('pkg_test5/parsing/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

    def test_pkg_test6(self):
        from pkg_test6 import parser
        self.assertTrue(os.path.exists('pkg_test6/parsing/parsetab.py'))
        self.assertTrue(os.path.exists('pkg_test6/parsing/lextab.py'))
        self.assertTrue(os.path.exists('pkg_test6/parsing/parser.out'))
        r = parser.parse('3+4+5')
        self.assertEqual(r, 12)

unittest.main()
94
ext/ply/test/yacc_error5.py
Normal file
@@ -0,0 +1,94 @@
# -----------------------------------------------------------------------------
# yacc_error5.py
#
# Lineno and position tracking with error tokens
# -----------------------------------------------------------------------------
import sys

if ".." not in sys.path: sys.path.insert(0,"..")
import ply.yacc as yacc

from calclex import tokens

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_assign_error(t):
    'statement : NAME EQUALS error'
    line_start, line_end = t.linespan(3)
    pos_start, pos_end = t.lexspan(3)
    print("Assignment Error at %d:%d to %d:%d" % (line_start,pos_start,line_end,pos_end))

def p_statement_expr(t):
    'statement : expression'
    print(t[1])

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+' : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    line_start, line_end = t.linespan(2)
    pos_start, pos_end = t.lexspan(2)
    print("Group at %d:%d to %d:%d" % (line_start,pos_start, line_end, pos_end))
    t[0] = t[2]

def p_expression_group_error(t):
    'expression : LPAREN error RPAREN'
    line_start, line_end = t.linespan(2)
    pos_start, pos_end = t.lexspan(2)
    print("Syntax error at %d:%d to %d:%d" % (line_start,pos_start, line_end, pos_end))
    t[0] = 0

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

parser = yacc.yacc()
import calclex
calclex.lexer.lineno=1
parser.parse("""
a = 3 +
(4*5) +
(a b c) +
+ 6 + 7
""", tracking=True)
80
ext/ply/test/yacc_error6.py
Normal file
@@ -0,0 +1,80 @@
# -----------------------------------------------------------------------------
# yacc_error6.py
#
# Panic mode recovery test
# -----------------------------------------------------------------------------
import sys

if ".." not in sys.path: sys.path.insert(0,"..")
import ply.yacc as yacc

from calclex import tokens

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

def p_statements(t):
    'statements : statements statement'
    pass

def p_statements_1(t):
    'statements : statement'
    pass

def p_statement_assign(p):
    'statement : LPAREN NAME EQUALS expression RPAREN'
    print("%s=%s" % (p[2],p[4]))

def p_statement_expr(t):
    'statement : LPAREN expression RPAREN'
    print(t[1])

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+' : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_error(p):
    if p:
        print("Line %d: Syntax error at '%s'" % (p.lineno, p.value))
    # Scan ahead looking for a name token
    while True:
        tok = parser.token()
        if not tok or tok.type == 'RPAREN':
            break
    if tok:
        parser.restart()
    return None

parser = yacc.yacc()
import calclex
calclex.lexer.lineno=1

parser.parse("""
(a = 3 + 4)
(b = 4 + * 5 - 6 + *)
(c = 10 + 11)
""")
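yacc_error6.py above exercises panic-mode recovery: on a syntax error, `p_error()` discards tokens until a synchronization token (here `RPAREN`) and then calls `parser.restart()`. The discard loop, reduced to plain Python over a token list (function and token names here are illustrative, not PLY API):

```python
def discard_until_sync(tokens, sync="RPAREN"):
    # Mirror the p_error() loop: drop tokens up to and including the
    # synchronization token, then hand back the rest for the restart.
    it = iter(tokens)
    for tok in it:
        if tok == sync:
            return list(it)
    return []  # hit end of input without finding the sync token
```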
80
ext/ply/test/yacc_error7.py
Normal file
@@ -0,0 +1,80 @@
# -----------------------------------------------------------------------------
# yacc_error7.py
#
# Panic mode recovery test using deprecated functionality
# -----------------------------------------------------------------------------
import sys

if ".." not in sys.path: sys.path.insert(0,"..")
import ply.yacc as yacc

from calclex import tokens

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

def p_statements(t):
    'statements : statements statement'
    pass

def p_statements_1(t):
    'statements : statement'
    pass

def p_statement_assign(p):
    'statement : LPAREN NAME EQUALS expression RPAREN'
    print("%s=%s" % (p[2],p[4]))

def p_statement_expr(t):
    'statement : LPAREN expression RPAREN'
    print(t[1])

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+' : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_error(p):
    if p:
        print("Line %d: Syntax error at '%s'" % (p.lineno, p.value))
    # Scan ahead looking for a name token
    while True:
        tok = yacc.token()
        if not tok or tok.type == 'RPAREN':
            break
    if tok:
        yacc.restart()
    return None

parser = yacc.yacc()
import calclex
calclex.lexer.lineno=1

parser.parse("""
(a = 3 + 4)
(b = 4 + * 5 - 6 + *)
(c = 10 + 11)
""")

@@ -12,8 +12,8 @@ from calclex import tokens

# Parsing rules
precedence = (
    ('left','+','-'),
    ('left','*','/'),
    ('left', '+', '-'),
    ('left', '*', '/'),
    ('right','UMINUS'),
    )
70
ext/ply/test/yacc_unicode_literals.py
Normal file
@@ -0,0 +1,70 @@
# -----------------------------------------------------------------------------
# yacc_unicode_literals
#
# Test for unicode literals on Python 2.x
# -----------------------------------------------------------------------------
from __future__ import unicode_literals

import sys

if ".." not in sys.path: sys.path.insert(0,"..")
import ply.yacc as yacc

from calclex import tokens

# Parsing rules
precedence = (
    ('left','PLUS','MINUS'),
    ('left','TIMES','DIVIDE'),
    ('right','UMINUS'),
    )

# dictionary of names
names = { }

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    names[t[1]] = t[3]

def p_statement_expr(t):
    'statement : expression'
    print(t[1])

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if t[2] == '+' : t[0] = t[1] + t[3]
    elif t[2] == '-': t[0] = t[1] - t[3]
    elif t[2] == '*': t[0] = t[1] * t[3]
    elif t[2] == '/': t[0] = t[1] / t[3]

def p_expression_uminus(t):
    'expression : MINUS expression %prec UMINUS'
    t[0] = -t[2]

def p_expression_group(t):
    'expression : LPAREN expression RPAREN'
    t[0] = t[2]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_expression_name(t):
    'expression : NAME'
    try:
        t[0] = names[t[1]]
    except LookupError:
        print("Undefined name '%s'" % t[1])
        t[0] = 0

def p_error(t):
    print("Syntax error at '%s'" % t.value)

yacc.yacc()