Line numbers of parsed input are now maintained automatically. More of
the data structures needed by the API are now created automatically in
the API header file.
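
A minimal sketch of how line numbers can be maintained automatically in
a flex-based lexer; whether this commit relies on flex's yylineno
mechanism is an assumption on my part:

%option yylineno
%option noyywrap
%%
[ \t\r\n]+   { /* skip whitespace; flex still bumps yylineno on newlines */ }
[A-Za-z_]+   { printf("identifier on line %d: %s\n", yylineno, yytext); }
.            { printf("character on line %d: %s\n", yylineno, yytext); }
%%
int main(void) { return yylex(); }

With yylineno enabled, error messages can report the line of the
offending token without the lexer counting newlines by hand.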
Signed-off-by: Jan Lindemann <jan@janware.com>
For the first time, parsing doesn't error out with a syntax error. No
usable AST is produced yet, though: strings are not returned from the
lexer, and the AST lists aren't really lists.
TEXT:="Hello world!"; had to be excluded from the example, because I
don't see how it could be parsed with the given syntax. There is a
special sequence for "all visible characters", but any lexer regex I
could think of either also matches the characters that define
"alphabetic character" and returns their tokens (e.g. T_A), or vice
versa, depending on the rule order in the lexer input file. I suppose
the only sensible way to handle this is to define "all visible
characters" by adding tokens for the missing characters and then using
them alongside T_A ... T_Z or their derived types, as sketched below.
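
A minimal flex sketch of that token-per-character approach; all token
and file names here are hypothetical, not taken from this repository.
Each visible character not already covered by T_A ... T_Z gets its own
token, so the grammar can express "all visible characters" as the union
of those tokens instead of a single catch-all regex that would shadow
the letter rules (or be shadowed by them, depending on rule order):

%{
#include "parser.tab.h"   /* hypothetical bison-generated token header */
%}
%option noyywrap
%%
"A"          { return T_A;      /* one rule per letter, up to "Z"/T_Z */ }
"!"          { return T_EXCLAM; /* hypothetical tokens for characters */ }
"\""         { return T_DQUOTE; /* the alphabetic rules don't cover   */ }
":="         { return T_ASSIGN; }
[ \t\r\n]+   { /* skip whitespace between tokens */ }
.            { return T_OTHER;  /* any remaining visible character    */ }
%%

On the bison side, a rule along the lines of
visible_char: T_A | ... | T_Z | T_EXCLAM | T_DQUOTE | T_OTHER;
could then stand in for the "all visible characters" sequence; handling
of spaces inside string literals is left aside in this sketch.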
Signed-off-by: Jan Lindemann <jan@janware.com>
It doesn't successfully parse grammartest.code yet; it errors out with a
syntax error on whitespace. But at least it compiles and starts.
Signed-off-by: Jan Lindemann <jan@janware.com>
More code is removed from the special parser directories and centralized
into grammar.py, Cmd.py, and generate-flex-bison.mk.
Signed-off-by: Jan Lindemann <jan@janware.com>