MSTperl is a Perl reimplementation of the MST parser of Ryan McDonald (http://www.seas.upenn.edu/~strctlrn/MSTParser/MSTParser.html).
The MST parser (Maximum Spanning Tree parser) is a state-of-the-art natural language dependency parser -- a tool that takes a sentence and returns its dependency tree.
MSTperl implements only a subset of the original parser's functionality; the limitations include the following:
the parser is non-projective, currently with no way of enforcing projectivity of the parse trees;
only first-order features are supported, i.e. no second-order or third-order features are possible;
the implementation of MIRA is single-best MIRA, with a closed-form update instead of quadratic programming (a sketch of this update follows the list).
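The single-best MIRA update has a simple closed-form solution: the weight vector is moved along the difference of the gold and predicted feature vectors just far enough that the gold tree outscores the prediction by its loss. The following is a minimal sketch of that technique, not MSTperl's actual code; the plain-dictionary data structures are assumptions made for the example.

    from collections import Counter

    def mira_update(weights, gold_feats, pred_feats, loss):
        """Single-best MIRA update with a closed-form step size.

        weights     -- dict: feature name -> weight
        gold_feats  -- Counter of features of the correct tree
        pred_feats  -- Counter of features of the best-scoring tree
        loss        -- e.g. number of words attached to a wrong head
        """
        # Difference between the gold and predicted feature vectors.
        diff = Counter(gold_feats)
        diff.subtract(pred_feats)
        diff = {f: v for f, v in diff.items() if v != 0}

        norm_sq = sum(v * v for v in diff.values())
        if norm_sq == 0.0:
            return  # identical feature vectors, nothing to learn from

        # How much the gold tree currently outscores the prediction.
        margin = sum(weights.get(f, 0.0) * v for f, v in diff.items())

        # Smallest step that makes the margin at least `loss`
        # (the closed-form solution of the MIRA quadratic program).
        tau = max(0.0, (loss - margin) / norm_sq)

        for f, v in diff.items():
            weights[f] = weights.get(f, 0.0) + tau * v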
On the other hand, the parser supports several advanced features:
parallel features, i.e. enriching the parser input with a word-aligned sentence in another language (an example feature is sketched after this list);
adding large-scale information, i.e. enriching the feature set with features based on the pointwise mutual information of word pairs in a large corpus (CzEng), also sketched below;
weighted/unweighted parser model interpolation;
combination of several instances of the MSTperl parser (through the MST algorithm);
combination of several existing parses from any parsers (through the MST algorithm); the interpolation and combination modes are illustrated after this list.
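To illustrate the parallel features, the sketch below builds a few first-order (single-edge) features for one head-dependent candidate, including a feature taken from the word aligned to the dependent in the other-language sentence. The feature templates shown are simplified examples invented for illustration; the actual feature set used by MSTperl is described in its documentation.

    def edge_features(head, dep, aligned_dep=None):
        """First-order features for one candidate edge (head -> dependent).

        head, dep   -- dicts with at least the 'form' and 'pos' of the words
        aligned_dep -- optional dict for the word aligned to the dependent
                       in the other-language sentence (parallel feature)
        """
        feats = [
            'h_pos|d_pos=%s|%s' % (head['pos'], dep['pos']),
            'h_form|d_form=%s|%s' % (head['form'], dep['form']),
            'h_pos|d_form=%s|%s' % (head['pos'], dep['form']),
        ]
        if aligned_dep is not None:
            # Parallel feature: the head's POS combined with the POS of
            # the word aligned to the dependent in the other language.
            feats.append('h_pos|ali_d_pos=%s|%s'
                         % (head['pos'], aligned_dep['pos']))
        return feats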
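The large-scale features are derived from the pointwise mutual information of word pairs. As a reminder of the quantity involved, PMI can be estimated from corpus counts as in the sketch below; the counts and their source are assumptions of the example, and a real feature set would typically discretize the value into buckets.

    import math

    def pmi(pair_count, head_count, dep_count, total_pairs):
        """Pointwise mutual information of a (head, dependent) word pair.

        pair_count  -- how often the two words co-occur as a pair
        head_count  -- corpus frequency of the head word
        dep_count   -- corpus frequency of the dependent word
        total_pairs -- total number of word pairs counted in the corpus
        """
        p_pair = pair_count / total_pairs
        p_head = head_count / total_pairs
        p_dep = dep_count / total_pairs
        return math.log(p_pair / (p_head * p_dep))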
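The interpolation and combination modes can all be viewed the same way: edge scores from several sources (parser models, parser instances, or existing parses) are summed with weights into one graph, and the maximum spanning tree of that graph is the combined parse. The following is a minimal sketch; it uses networkx's Chu-Liu/Edmonds implementation purely for illustration, which is not part of MSTperl.

    import networkx as nx

    def combine_parses(edge_scores_list, weights, n_words):
        """Combine edge scores from several sources into one tree.

        edge_scores_list -- one dict per source: (head, dependent) -> score;
                            for an existing parse, each of its edges can
                            simply score 1.0
        weights          -- interpolation weight of each source
        n_words          -- sentence length; node 0 is the artificial root
        """
        graph = nx.DiGraph()
        for head in range(n_words + 1):
            for dep in range(1, n_words + 1):
                if head == dep:
                    continue
                # Weighted sum of the scores assigned to head -> dep.
                score = sum(w * scores.get((head, dep), 0.0)
                            for w, scores in zip(weights, edge_scores_list))
                graph.add_edge(head, dep, weight=score)

        # The maximum spanning arborescence of the combined graph is the
        # highest-scoring dependency tree (Chu-Liu/Edmonds algorithm).
        tree = nx.maximum_spanning_arborescence(graph)
        return sorted(tree.edges())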
The MSTperl parser is tuned for parsing Czech. Trained models are available for Czech, English and German. We can train the parser for other languages on demand, or you can train it yourself -- the guidelines are part of the documentation.
The parser, together with detailed documentation, is available on CPAN (http://search.cpan.org/~rur/Treex-Parser-MSTperl/). The research has been supported by the EU Seventh Framework Programme under grant agreement 247762 (Faust), and by the grants GAUK116310 and GA201/09/H057.