
Module epydoc.markup.epytext

Parser for epytext strings. Epytext is a lightweight markup whose primary intended application is Python documentation strings. This parser converts epytext strings to an XML/DOM representation. Epytext strings can contain the following structural blocks: epytext, para, section, fieldlist, field, literalblock, doctestblock, ulist, olist, and li. Additionally, the following inline regions may be used within para blocks: code, math, index, italic, bold, uri, link, and symbol. The returned DOM tree will conform to the following Document Type Definition:
  <!ENTITY % colorized '(code | math | index | italic |
                         bold | uri | link | symbol)*'>

  <!ELEMENT epytext ((para | literalblock | doctestblock |
                     section | ulist | olist)*, fieldlist?)>

  <!ELEMENT para (#PCDATA | %colorized;)*>

  <!ELEMENT section (para | literalblock | doctestblock |
                     section | ulist | olist)+>

  <!ELEMENT fieldlist (field+)>
  <!ELEMENT field (tag, arg?, (para | literalblock | doctestblock |
                               ulist | olist)+)>
  <!ELEMENT tag (#PCDATA)>
  <!ELEMENT arg (#PCDATA)>
  
  <!ELEMENT literalblock (#PCDATA)>
  <!ELEMENT doctestblock (#PCDATA)>

  <!ELEMENT ulist (li+)>
  <!ELEMENT olist (li+)>
  <!ELEMENT li (para | literalblock | doctestblock | ulist | olist)+>
  <!ATTLIST li bullet NMTOKEN #IMPLIED>
  <!ATTLIST olist start NMTOKEN #IMPLIED>

  <!ELEMENT uri     (name, target)>
  <!ELEMENT link    (name, target)>
  <!ELEMENT name    (#PCDATA | %colorized;)*>
  <!ELEMENT target  (#PCDATA)>
  
  <!ELEMENT code    (#PCDATA | %colorized;)*>
  <!ELEMENT math    (#PCDATA | %colorized;)*>
  <!ELEMENT italic  (#PCDATA | %colorized;)*>
  <!ELEMENT bold    (#PCDATA | %colorized;)*>
  <!ELEMENT indexed (#PCDATA | %colorized;)*>

  <!ELEMENT symbol (#PCDATA)>
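
As a point of reference, here is a hedged sketch (Python 2, since epydoc 2.x targets Python 2) of an epytext string that exercises several of the blocks and inline regions declared above; the string and variable names are illustrative, not taken from epydoc itself.

  from epydoc.markup import epytext

  # An illustrative epytext string: a paragraph with inline markup, an
  # ordered list, a literal block, and a field.
  EXAMPLE = """
  Summary paragraph with B{bold}, I{italic}, and C{code} regions.

    1. An ordered list item.
    2. Another item with a U{link <http://epydoc.sf.net>}.

  A literal block follows::

      kept   exactly   as   written

  @param x: A field describing a (hypothetical) parameter.
  """

  errors = []
  tree = epytext.parse(EXAMPLE, errors)  # -> xml.dom.minidom.Document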

Classes
ParsedEpytextDocstring  
Token Tokens are an intermediate data structure used while constructing the structuring DOM tree for a formatted docstring.

Exceptions
ColorizingError An error generated while colorizing a paragraph.
StructuringError An error generated while structuring a formatted documentation string.
TokenizationError An error generated while tokenizing a formatted documentation string.

Function Summary
xml.dom.minidom.Document parse(str, errors)
Return a DOM tree encoding the contents of an epytext string.
xml.dom.minidom.Document parse_as_literal(str)
Return a DOM document matching the epytext DTD, containing a single literal block.
xml.dom.minidom.Document parse_as_para(str)
Return a DOM document matching the epytext DTD, containing a single paragraph.
ParsedDocstring parse_docstring(docstring, errors, **options)
Parse the given docstring, which is formatted using epytext, and return a ParsedDocstring representation of its contents.
xml.dom.minidom.Document pparse(str, show_warnings, show_errors, stream)
Pretty-parse the string.
string to_debug(tree, indent, seclevel)
Convert a DOM document encoding epytext back to an epytext string, annotated with extra debugging information.
string to_epytext(tree, indent, seclevel)
Convert a DOM document encoding epytext back to an epytext string.
string to_plaintext(tree, indent, seclevel)
Convert a DOM document encoding epytext to a string representation.
  _add_list(doc, bullet_token, stack, indent_stack, errors)
Add a new list item or field to the DOM tree, with the given bullet or field tag.
  _add_para(doc, para_token, stack, indent_stack, errors)
Colorize the given paragraph, and add it to the DOM tree.
  _add_section(doc, heading_token, stack, indent_stack, errors)
Add a new section to the DOM tree, with the given heading.
Element _colorize(doc, token, errors, tagName)
Given a string containing the contents of a paragraph, produce a DOM Element encoding that paragraph.
  _colorize_link(doc, link, token, end, errors)
  _pop_completed_blocks(token, stack, indent_stack)
Pop any completed blocks off the stack.
list of Token _tokenize(str, errors)
Split a given formatted docstring into an ordered list of Tokens, according to the epytext markup rules.
int _tokenize_doctest(lines, start, block_indent, tokens, errors)
Construct a Token containing the doctest block starting at lines[start], and append it to tokens.
int _tokenize_listart(lines, start, bullet_indent, tokens, errors)
Construct Tokens for the bullet and the first paragraph of the list item (or field) starting at lines[start], and append them to tokens.
int _tokenize_literal(lines, start, block_indent, tokens, errors)
Construct a Token containing the literal block starting at lines[start], and append it to tokens.
int _tokenize_para(lines, start, para_indent, tokens, errors)
Construct a Token containing the paragraph starting at lines[start], and append it to tokens.

Variable Summary
list SYMBOLS: A list of the escape symbols that are supported by epydoc.
SRE_Pattern _BRACE_RE = [\{\}]
SRE_Pattern _BULLET_RE = -( +|$)|(\d+\.)+( +|$)|@\w+( [^\{\}:\n]+)?:...
dict _COLORIZING_TAGS = {'S': 'symbol', 'C': 'code', 'B': 'bo...
dict _ESCAPES = {'lb': '{', 'rb': '}'}
SRE_Pattern _FIELD_BULLET_RE = @\w+( [^\{\}:\n]+)?:( +|$)
str _HEADING_CHARS = '=-~'
list _LINK_COLORIZING_TAGS = ['link', 'uri']
SRE_Pattern _LIST_BULLET_RE = -( +|$)|(\d+\.)+( +|$)
dict _SYMBOLS = {'xi': 1, '>=': 1, 'lArr': 1, 'Chi': 1, 'omeg...
SRE_Pattern _TARGET_RE = ^(.*?)\s*<(?:URI:|L:)?([^<>]+)>$

Function Details

parse(str, errors=None)

Return a DOM tree encoding the contents of an epytext string. Any errors generated during parsing will be stored in errors.
Parameters:
str - The epytext string to parse.
           (type=string)
errors - A list where any errors generated during parsing will be stored. If no list is specified, then fatal errors will generate exceptions, and non-fatal errors will be ignored.
           (type=list of ParseError)
Returns:
a DOM tree encoding the contents of an epytext string.
           (type=xml.dom.minidom.Document)
Raises:
ParseError - If errors is None and an error is encountered while parsing.
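
A minimal usage sketch (Python 2; the sample string is illustrative). The behaviour commented below follows the description of errors above:

  from epydoc.markup import epytext

  errors = []                    # any ParseErrors are collected here
  tree = epytext.parse("A docstring with B{bold} text.", errors)
  if errors:
      for e in errors:
          print e                # report problems; fatal ones land here too
  if tree is not None:
      print tree.toxml()         # serialize the DOM tree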

parse_as_literal(str)

Return a DOM document matching the epytext DTD, containing a single literal block. That literal block will include the contents of the given string. This method is typically used as a fall-back when the parser fails.
Parameters:
str - The string which should be enclosed in a literal block.
           (type=string)
Returns:
A DOM document containing str in a single literal block.
           (type=xml.dom.minidom.Document)
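
A sketch of the fall-back pattern mentioned above; the helper name is hypothetical, and the assumption that parse returns None on fatal errors (when an error list is supplied) is noted in the comment:

  from epydoc.markup import epytext

  def parse_or_literal(docstring):
      # Hypothetical helper: try the real parser first and, if it fails,
      # wrap the raw text in a single literal block so something is shown.
      errors = []
      tree = epytext.parse(docstring, errors)
      if tree is None:           # assumed: fatal errors leave tree as None
          return epytext.parse_as_literal(docstring)
      return tree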

parse_as_para(str)

Return a DOM document matching the epytext DTD, containing a single paragraph. That paragraph will include the contents of the given string. This can be used to wrap some forms of automatically generated information (such as type names) in paragraphs.
Parameters:
str - The string which should be enclosed in a paragraph.
           (type=string)
Returns:
A DOM document containing str in a single paragraph.
           (type=xml.dom.minidom.Document)
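
A small sketch of the wrapping use described above (Python 2):

  from epydoc.markup import epytext

  # Wrap an automatically generated type name in a one-paragraph document.
  doc = epytext.parse_as_para("list of strings")
  print doc.toxml()   # roughly: <epytext><para>list of strings</para></epytext>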

parse_docstring(docstring, errors, **options)

Parse the given docstring, which is formatted using epytext, and return a ParsedDocstring representation of its contents.
Parameters:
docstring - The docstring to parse
           (type=string)
errors - A list where any errors generated during parsing will be stored.
           (type=list of ParseError)
options - Extra options. Unknown options are ignored. Currently, no extra options are defined.
Returns:
ParsedDocstring
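
A hedged sketch (Python 2) of this higher-level entry point. Rendering the result with to_plaintext(None) assumes that no docstring linker is needed for plain output:

  from epydoc.markup import epytext

  errors = []
  pds = epytext.parse_docstring("Uses B{epytext} markup.", errors)
  # pds is a ParsedEpytextDocstring; render it without resolving links.
  print pds.to_plaintext(None)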

pparse(str, show_warnings=1, show_errors=1, stream=sys.stderr)

Pretty-parse the string. This parses the string, and catches any warnings or errors produced. Any warnings and errors are displayed, and the resulting DOM parse structure is returned.
Parameters:
str - The string to parse.
           (type=string)
show_warnings - Whether or not to display non-fatal errors generated by parsing str.
           (type=boolean)
show_errors - Whether or not to display fatal errors generated by parsing str.
           (type=boolean)
stream - The stream that warnings and errors should be written to.
           (type=stream)
Returns:
a DOM document encoding the contents of str.
           (type=xml.dom.minidom.Document)
Raises:
SyntaxError - If any fatal errors were encountered.
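
A sketch of the convenience wrapper (Python 2). Falling back to parse_as_literal on a SyntaxError is this example's choice, not part of pparse itself:

  from epydoc.markup import epytext

  docstring = "A paragraph with an I{inline} region."
  try:
      tree = epytext.pparse(docstring)   # warnings and errors go to stream
  except SyntaxError:
      # Only fatal errors raise; degrade gracefully to a literal block.
      tree = epytext.parse_as_literal(docstring)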

to_debug(tree, indent=4, seclevel=0)

Convert a DOM document encoding epytext back to an epytext string, annotated with extra debugging information. This function is similar to to_epytext, but it adds explicit information about where different blocks begin, along the left margin.
Parameters:
tree - A DOM document encoding of an epytext string.
           (type=xml.dom.minidom.Document)
indent - The indentation for the string representation of tree. Each line of the returned string will begin with indent space characters.
           (type=int)
seclevel - The section level that tree appears at. This is used to generate section headings.
           (type=int)
Returns:
The epytext string corresponding to tree.
           (type=string)

to_epytext(tree, indent=0, seclevel=0)

Convert a DOM document encoding epytext back to an epytext string. This is the inverse operation of parse. I.e., assuming there are no errors, the following is true:
  • parse(to_epytext(tree)) == tree
The inverse also holds approximately, except that whitespace, line wrapping, and character escaping may be done differently:
  • to_epytext(parse(str)) == str (approximately)
Parameters:
tree - A DOM document encoding of an epytext string.
           (type=xml.dom.minidom.Document)
indent - The indentation for the string representation of tree. Each line of the returned string will begin with indent space characters.
           (type=int)
seclevel - The section level that tree appears at. This is used to generate section headings.
           (type=int)
Returns:
The epytext string corresponding to tree.
           (type=string)
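
A round-trip sketch (Python 2) of the approximate identities listed above; the sample string is illustrative:

  from epydoc.markup import epytext

  source = "One paragraph.\n\nAnother paragraph with C{code}.\n"
  tree = epytext.parse(source, [])
  regenerated = epytext.to_epytext(tree)
  # regenerated should match source up to wrapping and escaping
  print regenerated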

to_plaintext(tree, indent=0, seclevel=0)

Convert a DOM document encoding epytext to a string representation. This representation is similar to the string generated by to_epytext, but to_plaintext removes inline markup, prints escaped characters in unescaped form, etc.
Parameters:
tree - A DOM document encoding of an epytext string.
           (type=xml.dom.minidom.Document)
indent - The indentation for the string representation of tree. Each line of the returned string will begin with indent space characters.
           (type=int)
seclevel - The section level that tree appears at. This is used to generate section headings.
           (type=int)
Returns:
The epytext string corresponding to tree.
           (type=string)
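
A short sketch (Python 2) showing inline markup being stripped:

  from epydoc.markup import epytext

  tree = epytext.parse("Render I{italic} and C{code} as plain text.", [])
  print epytext.to_plaintext(tree)   # inline markup removed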

_add_list(doc, bullet_token, stack, indent_stack, errors)

Add a new list item or field to the DOM tree, with the given bullet or field tag. When necessary, create the associated list.

_add_para(doc, para_token, stack, indent_stack, errors)

Colorize the given paragraph, and add it to the DOM tree.

_add_section(doc, heading_token, stack, indent_stack, errors)

Add a new section to the DOM tree, with the given heading.

_colorize(doc, token, errors, tagName='para')

Given a string containing the contents of a paragraph, produce a DOM Element encoding that paragraph. Colorized regions are represented using DOM Elements, and text is represented using DOM Texts.
Parameters:
errors - A list of errors. Any newly generated errors will be appended to this list.
           (type=list of string)
tagName - The element tag for the DOM Element that should be generated.
           (type=string)
Returns:
a DOM Element encoding the given paragraph.
           (type=Element)

_pop_completed_blocks(token, stack, indent_stack)

Pop any completed blocks off the stack. This includes any blocks that we have dedented past, as well as any list item blocks that we've dedented to. The top element on the stack should only be a list if we're about to start a new list item (i.e., if the next token is a bullet).

_tokenize(str, errors)

Split a given formatted docstring into an ordered list of Tokens, according to the epytext markup rules.
Parameters:
str - The epytext string
           (type=string)
errors - A list where any errors generated during parsing will be stored. If no list is specified, then errors will generate exceptions.
           (type=list of ParseError)
Returns:
a list of the Tokens that make up the given string.
           (type=list of Token)

_tokenize_doctest(lines, start, block_indent, tokens, errors)

Construct a Token containing the doctest block starting at lines[start], and append it to tokens. block_indent should be the indentation of the doctest block. Any errors generated while tokenizing the doctest block will be appended to errors.
Parameters:
lines - The list of lines to be tokenized
           (type=list of string)
start - The index into lines of the first line of the doctest block to be tokenized.
           (type=int)
block_indent - The indentation of lines[start]. This is the indentation of the doctest block.
           (type=int)
tokens - The list of Tokens to which the new Token for the doctest block should be appended.
           (type=list of Token)
errors - A list where any errors generated during parsing will be stored. If no list is specified, then errors will generate exceptions.
           (type=list of ParseError)
Returns:
The line number of the first line following the doctest block.
           (type=int)

_tokenize_listart(lines, start, bullet_indent, tokens, errors)

Construct Tokens for the bullet and the first paragraph of the list item (or field) starting at lines[start], and append them to tokens. bullet_indent should be the indentation of the list item. Any errors generated while tokenizing will be appended to errors.
Parameters:
lines - The list of lines to be tokenized
           (type=list of string)
start - The index into lines of the first line of the list item to be tokenized.
           (type=int)
bullet_indent - The indentation of lines[start]. This is the indentation of the list item.
           (type=int)
tokens - The list of Tokens to which the new Tokens for the list item should be appended.
           (type=list of Token)
errors - A list of the errors generated by parsing. Any new errors generated while tokenizing this list item will be appended to this list.
           (type=list of ParseError)
Returns:
The line number of the first line following the list item's first paragraph.
           (type=int)

_tokenize_literal(lines, start, block_indent, tokens, errors)

Construct a Token containing the literal block starting at lines[start], and append it to tokens. block_indent should be the indentation of the literal block. Any errors generated while tokenizing the literal block will be appended to errors.
Parameters:
lines - The list of lines to be tokenized
           (type=list of string)
start - The index into lines of the first line of the literal block to be tokenized.
           (type=int)
block_indent - The indentation of lines[start]. This is the indentation of the literal block.
           (type=int)
tokens - The list of Tokens to which the new Token for the literal block should be appended.
           (type=list of Token)
errors - A list of the errors generated by parsing. Any new errors generated while tokenizing this literal block will be appended to this list.
           (type=list of ParseError)
Returns:
The line number of the first line following the literal block.
           (type=int)

_tokenize_para(lines, start, para_indent, tokens, errors)

Construct a Token containing the paragraph starting at lines[start], and append it to tokens. para_indent should be the indentation of the paragraph. Any errors generated while tokenizing the paragraph will be appended to errors.
Parameters:
lines - The list of lines to be tokenized
           (type=list of string)
start - The index into lines of the first line of the paragraph to be tokenized.
           (type=int)
para_indent - The indentation of lines[start]. This is the indentation of the paragraph.
           (type=int)
tokens - The list of Tokens to which the new Token for the paragraph should be appended.
           (type=list of Token)
errors - A list of the errors generated by parsing. Any new errors generated while tokenizing this paragraph will be appended to this list.
           (type=list of ParseError)
Returns:
The line number of the first line following the paragraph.
           (type=int)

Variable Details

SYMBOLS

A list of the escape symbols that are supported by epydoc. Currently the following symbols are supported: S{<-}=←; S{->}=→; S{^}=↑; S{v}=↓; S{alpha}=α; S{beta}=β; S{gamma}=γ; S{delta}=δ; S{epsilon}=ε; S{zeta}=ζ; S{eta}=η; S{theta}=θ; S{iota}=ι; S{kappa}=κ; S{lambda}=λ; S{mu}=μ; S{nu}=ν; S{xi}=ξ; S{omicron}=ο; S{pi}=π; S{rho}=ρ; S{sigma}=σ; S{tau}=τ; S{upsilon}=υ; S{phi}=φ; S{chi}=χ; S{psi}=ψ; S{omega}=ω; S{Alpha}=Α; S{Beta}=Β; S{Gamma}=Γ; S{Delta}=Δ; S{Epsilon}=Ε; S{Zeta}=Ζ; S{Eta}=Η; S{Theta}=Θ; S{Iota}=Ι; S{Kappa}=Κ; S{Lambda}=Λ; S{Mu}=Μ; S{Nu}=Ν; S{Xi}=Ξ; S{Omicron}=Ο; S{Pi}=Π; S{Rho}=Ρ; S{Sigma}=Σ; S{Tau}=Τ; S{Upsilon}=Υ; S{Phi}=Φ; S{Chi}=Χ; S{Psi}=Ψ; S{Omega}=Ω; S{larr}=←; S{rarr}=→; S{uarr}=↑; S{darr}=↓; S{harr}=↔; S{crarr}=↵; S{lArr}=⇐; S{rArr}=⇒; S{uArr}=⇑; S{dArr}=⇓; S{hArr}=⇔; S{copy}=©; S{times}=×; S{forall}=∀; S{exist}=∃; S{part}=∂; S{empty}=∅; S{isin}=∈; S{notin}=∉; S{ni}=∋; S{prod}=∏; S{sum}=∑; S{prop}=∝; S{infin}=∞; S{ang}=∠; S{and}=∧; S{or}=∨; S{cap}=∩; S{cup}=∪; S{int}=∫; S{there4}=∴; S{sim}=∼; S{cong}=≅; S{asymp}=≈; S{ne}=≠; S{equiv}=≡; S{le}=≤; S{ge}=≥; S{sub}=⊂; S{sup}=⊃; S{nsub}=⊄; S{sube}=⊆; S{supe}=⊇; S{oplus}=⊕; S{otimes}=⊗; S{perp}=⊥; S{infinity}=∞; S{integral}=∫; S{product}=∏; S{>=}=≥; S{<=}=≤
Type:
list
Value:
['<-', '->', '^', 'v', 'alpha', 'beta', 'gamma', 'delta', 'epsilon']   
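
A hedged sketch (Python 2) of how these escapes show up in parsed output; the exact serialization is an assumption:

  from epydoc.markup import epytext

  tree = epytext.parse("An arrow S{rarr} and the letter S{alpha}.", [])
  # The escapes are expected to appear as <symbol>rarr</symbol> and
  # <symbol>alpha</symbol> elements in the serialized tree.
  print tree.toxml()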

_BRACE_RE

Type:
SRE_Pattern
Value:
[\{\}]                                                                 

_BULLET_RE

Type:
SRE_Pattern
Value:
-( +|$)|(\d+\.)+( +|$)|@\w+( [^\{\}:\n]+)?:( +|$)                      

_COLORIZING_TAGS

Type:
dict
Value:
{'B': 'bold',
 'C': 'code',
 'E': 'escape',
 'I': 'italic',
 'L': 'link',
 'M': 'math',
 'S': 'symbol',
 'U': 'uri',
...                                                                    

_ESCAPES

Type:
dict
Value:
{'lb': '{', 'rb': '}'}                                                 

_FIELD_BULLET_RE

Type:
SRE_Pattern
Value:
@\w+( [^\{\}:\n]+)?:( +|$)                                             

_HEADING_CHARS

Type:
str
Value:
'=-~'                                                                  

_LINK_COLORIZING_TAGS

Type:
list
Value:
['link', 'uri']                                                        

_LIST_BULLET_RE

Type:
SRE_Pattern
Value:
-( +|$)|(\d+\.)+( +|$)                                                 

_SYMBOLS

Type:
dict
Value:
{'>=': 1,
 'Chi': 1,
 'asymp': 1,
 'ge': 1,
 'lArr': 1,
 'omega': 1,
 'otimes': 1,
 'sube': 1,
...                                                                    

_TARGET_RE

Type:
SRE_Pattern
Value:
^(.*?)\s*<(?:URI:|L:)?([^<>]+)>$                                       

Generated by Epydoc 2.1 on Sat Mar 20 17:46:14 2004 http://epydoc.sf.net