The files included here appear to be from 2005. I am trying to track down the 2007 and 2009 files.
We have had many requests for thesis formats already set up in TeX. None of the files are guaranteed, but they have worked successfully for previous graduates. The Graduate College changes thesis formatting requirements from time to time; as far as we know, these files conform to Fall 2009 requirements. However, be sure to consult the current version of the Graduate College Handbook.
The 2009 version of the template was prepared by a student who was himself in the process of writing his PhD thesis and graduating. The template has therefore not been validated in any way by the Graduate College, and there is as yet no indication that a thesis prepared with it has been received and accepted by the Graduate College. The student who wrote this package can make no guarantee that the template complies with the current rules of the Graduate College. He is also busy, so support may be hit or miss until he graduates.
The 2007 version of the template was prepared by Tim Head, based on prior templates by Peter Czoschke and David Hull.
uiucthesis is a LaTeX package for formatting theses in the format required by the University of Illinois, effective Fall 2007.
Tim Head, Peter Czoschke, David Hull, Gregory Hart
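For orientation, a minimal document using the class might look like the sketch below. This is an assumption-laden skeleton, not taken from the package: the class file name and any class-specific preamble commands (degree, committee, deposit options, and so on) should be checked against the example file shipped with the package and the current Graduate College Handbook.

```latex
% Minimal sketch, assuming the class file is named uiucthesis.cls and is
% installed or placed alongside this file. The class defines additional
% preamble commands for thesis front matter; consult the example document
% bundled with the package for the actual command names.
\documentclass{uiucthesis}

\title{Thesis Title}
\author{Author Name}

\begin{document}
\maketitle

\chapter{Introduction}
Body text, formatted according to the Graduate College requirements.

\end{document}
```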
TEMPLATE FOR PHYS250 WORKSHEET
The Memoir class is used to create a worksheet template for the PHYS250 Laboratory Course at Queen's University, Kingston, ON, Canada.
Last Update: 2017-01-31
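As a rough illustration of the approach, a memoir-based worksheet could be set up along these lines. The sectioning and options below are illustrative assumptions, not copied from the actual PHYS250 template.

```latex
% Minimal sketch of a memoir-class worksheet. The class options and
% section names here are placeholders; the real template defines its
% own layout for the PHYS250 laboratory course.
\documentclass[11pt,letterpaper]{memoir}

\title{PHYS250 Worksheet}
\author{Queen's University, Kingston, ON}
\date{}

\begin{document}
\maketitle

\section{Objective}
State the goal of the experiment here.

\section{Data and Analysis}
Record measurements and calculations in this section.

\end{document}
```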
This project focuses on a modification of a greedy transition-based dependency parser. Typically, a part-of-speech (POS) tagger models a probability distribution over all possible tags for each word in a given sentence and chooses one as its best guess. This guess is then passed on to the parser, which uses it to build a parse tree. The current state of the art for POS tagging is about 97% word accuracy, which seems high but translates to only around 56% sentence accuracy. Small errors at the POS tagging stage can lead to large errors further down the NLP pipeline, and transition-based parsers are particularly sensitive to these kinds of mistakes. A maximum entropy Markov model was trained as a POS multi-tagger that passes more than its 1-best guess to the parser, on the assumption that the parser could make a better decision when committing to a parse for the sentence; this has been shown to improve accuracy in other parsing approaches. We show that there is a correlation between tagging ambiguity and parser accuracy: the higher the average number of tags per word, the higher the accuracy.