
Comparison with TBL:

Transformation-based learning (TBL) [see Brillcl] is a now-popular symbolic machine learning approach. Since it has been applied to the same task [see RM95], it is interesting to compare the two algorithms. Moreover, the principles of TBL are in some ways very similar to ours: learning transformations can be seen as learning our refinement rules (or, chronologically, the other way around). Two main differences must be mentioned. First, the rules are not learned in the same order. In TBL, a rule is learned because applying it to the data yields the greatest improvement, so a rule that fixes a preceding rule (the equivalent of our refinement rule) may be learned only after any number of intermediate rules. In ALLiS, refinement rules are learned immediately after the ``mother rule'', and the two rules are explicitly linked.
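The greedy ordering described above can be sketched as follows. This is a hypothetical minimal version for illustration only: the rule representation, the helper names, and the toy tags are our assumptions, not part of TBL or ALLiS as published.

```python
def errors(tags, gold):
    # Number of positions where the current tagging disagrees with the gold.
    return sum(t != g for t, g in zip(tags, gold))

def apply_rule(rule, tags):
    # A rule is a (from_tag, to_tag, context_test) triple, applied
    # simultaneously at every position (illustrative representation).
    src, dst, test = rule
    return [dst if t == src and test(tags, i) else t
            for i, t in enumerate(tags)]

def error_reduction(rule, tags, gold):
    return errors(tags, gold) - errors(apply_rule(rule, tags), gold)

def tbl_learn(tags, gold, candidates, max_rules):
    """Greedy TBL-style loop: at each iteration, keep the single rule
    whose application most reduces the training error. A rule that
    repairs an earlier rule's mistakes is found like any other rule,
    possibly many iterations later."""
    learned = []
    for _ in range(max_rules):
        best = max(candidates, key=lambda r: error_reduction(r, tags, gold))
        if error_reduction(best, tags, gold) <= 0:
            break  # no candidate improves the tagging any further
        tags = apply_rule(best, tags)
        learned.append(best)
    return learned, tags

# Toy run: retag the middle 'N' as 'V' when flanked by two 'N's.
between_Ns = ('N', 'V', lambda tags, i: 0 < i < len(tags) - 1
              and tags[i - 1] == 'N' and tags[i + 1] == 'N')
no_op = ('V', 'N', lambda tags, i: False)
learned, final = tbl_learn(['N', 'N', 'N'], ['N', 'V', 'N'],
                           [no_op, between_Ns], max_rules=5)
```

By contrast, an ALLiS-style learner would attach a refinement rule to its mother rule at the moment the mother rule is induced, rather than rediscovering the fix in a later greedy iteration.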

A second major difference is the kind of information TBL uses. TBL first tags the data with the most frequent category of each element. The system can then use the categories of the adjacent elements (left and right), very useful information that ALLiS does not exploit. ALLiS could use the categories of the preceding elements, since this information will be available during parsing, but we do not incorporate it into the features3. Although the two systems do not use the same information, they provide comparable results for this task (NP extraction; see Table 5). Moreover, ALLiS generates only 1369 rules, whereas TBL stops after 2000 rules; the rules ALLiS learns therefore seem to be of higher quality. Table 5 also shows the improvement due to the lexicalized information (the W feature).
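TBL's initial state, the most-frequent-category baseline, can be sketched as below. This is a hypothetical minimal version: the function name, the toy training pairs, and the default tag for unseen words are our assumptions for illustration.

```python
from collections import Counter, defaultdict

def most_frequent_baseline(train_pairs, sentence):
    """Tag each word with its most frequent category in the training
    data -- the state from which TBL's transformations start."""
    counts = defaultdict(Counter)
    for word, tag in train_pairs:
        counts[word][tag] += 1
    # Falling back to 'NN' for unseen words is our assumption,
    # not a detail specified by TBL.
    return [counts[w].most_common(1)[0][0] if w in counts else 'NN'
            for w in sentence]

# Toy training data and sentence.
train = [('the', 'DT'), ('dog', 'NN'), ('the', 'DT'),
         ('barks', 'VBZ'), ('dog', 'NN'), ('dog', 'VB')]
tags = most_frequent_baseline(train, ['the', 'dog', 'barks'])
# A TBL transformation may then test tags[i-1] or tags[i+1] -- the
# adjacent-category context that ALLiS's features do not include.
```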


Table 5: Comparison between TBL and ALLiS.

                               w/o words   with words
ALLiS ($\theta $=0.50, lg=1)     90.70       92.18
TBL                              90.60       92.03


Another interesting aspect of TBL is the absence of a threshold: since the best rule is chosen at each iteration, no equivalent of our $\theta $ is needed. The stopping criterion is instead the maximum number of rules the system learns (after n rules, it stops), and increasing this number generally improves the results.

After this first experiment using no prior knowledge, we now present the linguistic knowledge we will use to improve ALLiS.


Hammerton J. 2002-03-13