lid (version 0.2)
/Users/dcavar/Documents/Teaching/DGfS Herbstschule 2005/Code/LID/lid.py
lid.py
(C) 2005 by Damir Cavar <dcavar@indiana.edu>
License:
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
Functionality:
1. Startup:
Lid loads all *.dat files in the current directory, assuming that each
file contains the tri-gram model of the language named by the file name
(e.g. japanese.dat, german.dat, etc.).
2. Processing:
Lid processes all the files given as parameters to the script and prints
out the language of the text that the file contains.
Lid can be used within an application by importing the class and using its
methods, as shown at the end of this code (in the __main__ part):
myLid = Lid()
languagename = myLid("This is an English example.")
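The startup step described above (loading every *.dat model in a directory) can be sketched as follows. This is a minimal illustration, not Lid's actual loading code: the per-line file format (tri-gram and probability separated by a tab) is an assumption, so adjust the split to match what lidtrainer.py actually writes.

```python
import glob
import os

def load_models(directory="."):
    """Load all *.dat tri-gram models found in a directory.

    Assumed format (may differ from lidtrainer.py's real output):
    one tri-gram and its probability per line, tab-separated.
    Returns a dict mapping language name -> {tri-gram: probability}.
    """
    models = {}
    for path in glob.glob(os.path.join(directory, "*.dat")):
        # The language name is the file name without the .dat extension.
        language = os.path.splitext(os.path.basename(path))[0]
        trigrams = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                parts = line.rstrip("\n").split("\t")
                if len(parts) == 2:
                    trigrams[parts[0]] = float(parts[1])
        models[language] = trigrams
    return models
```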
Basics:
Lid is based on a tri-gram model of a training corpus for a given language.
Use lidtrainer.py to generate such language models.
The language models are sets of three-character sequences (tri-grams) extracted
from the training corpus, with their frequency. The probability of each
tri-gram is calculated (given the frequency of the tri-gram and the number
of all tri-grams in the corpus) and stored with the tri-gram in the language
model.
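The probability computation described above can be sketched as a small helper: each tri-gram's probability is its relative frequency over all tri-grams in the text. The function name is illustrative, not part of Lid's API.

```python
def trigram_probabilities(text):
    """Map each three-character sequence in text to its relative frequency."""
    counts = {}
    for i in range(len(text) - 2):
        tri = text[i:i + 3]
        counts[tri] = counts.get(tri, 0) + 1
    total = float(sum(counts.values()))
    # For text shorter than 3 characters, counts is empty and so is the result.
    return {tri: n / total for tri, n in counts.items()}
```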
Lid generates all tri-grams for the test document and compares the probability
of each tri-gram with the probability of the corresponding tri-gram in each
language model. For each tri-gram, the deviation from the corresponding
tri-gram in the language model is calculated. If a tri-gram is not found in
the language model, the deviation is assumed to be maximal, i.e. equal to 1.
The language model that has the minimal deviation score for the tri-grams in
the tested text is assumed to represent the language of the tested text.
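The scoring described above can be sketched as one self-contained function: sum the per-tri-gram deviations against each model and return the language with the smallest total. The function name and the `models` layout (language name mapped to a tri-gram-to-probability dict) are assumptions for illustration, not Lid's actual API.

```python
def identify_language(text, models):
    """Return the language whose model deviates least from the text.

    `models` maps a language name to a dict of tri-gram -> probability.
    A tri-gram absent from a model contributes the maximal deviation of 1.
    """
    # Relative frequency of each tri-gram in the test text.
    counts = {}
    for i in range(len(text) - 2):
        tri = text[i:i + 3]
        counts[tri] = counts.get(tri, 0) + 1
    total = float(sum(counts.values()))
    probs = {tri: n / total for tri, n in counts.items()}

    def deviation(language):
        model = models[language]
        return sum(
            abs(p - model[tri]) if tri in model else 1.0
            for tri, p in probs.items()
        )

    # The minimal total deviation wins.
    return min(models, key=deviation)
```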
This is a very simple but effective language-ID strategy, developed for
teaching purposes. A real-world application would require much more evaluation
of the significance of the deviations, optimization of the language models, and
many other refinements.
Please send your comments and suggestions!
Data
    __author__ = 'Damir Cavar'
    __version__ = 0.2
Author
    Damir Cavar