The history of luaTEX 2006–2009 / v 0.50 - Pragma ADE
Interesting is that up to now I didn't really miss alternation, but it may as well be that the lack of it drove me to come up with different solutions. For ConTEXt MkIV the MetaPost to pdf converter has been rewritten in Lua. This is a prelude to direct Lua output from MetaPost, but I needed the exercise. It was among the first Lua code in MkIV.

Progressive (sequential) parsing of the data is an option, and is done in MkII using pure TEX. We collect words and compare them to PostScript directives and act accordingly. The messy parts are scanning the preamble, which has specials to be dealt with as well as lots of unpredictable code to skip, and the fshow command which adds text to a graphic. But really dirty are the code fragments that deal with setting the line width and penshapes, so the cleanup of this takes some time.

In Lua a different approach is taken. There is an mp table which collects a lot of functions that more or less reflect the output of MetaPost. The functions take care of generating the right pdf code and also handle the transformations needed because of the differences between PostScript and pdf. The sequential PostScript that comes from MetaPost is collected in one string and converted using gsub into a sequence of Lua function calls. Before this can be done, some cleanup takes place. The resulting string is then executed as Lua code. As an example:

1 0 0 2 0 0 curveto

becomes

mp.curveto(1,0,0,2,0,0)

which results in:

\pdfliteral{1 0 0 2 0 0 c}

In between, the path is stored and transformed, which is needed in the case of penshapes, where some PostScript feature is used that is not available in pdf.

During the development of LuaTEX a new feature was added to Lua: lpeg. With lpeg you can define text scanners. In fact, you can build parsers for languages quite conveniently, so without doubt we will see it show up all over MkIV. Since I needed an exercise to get accustomed with lpeg, I rewrote the mentioned converter.
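The gsub driven translation described above can be illustrated with a small, self-contained sketch. The mp table and the single pattern below are simplified stand-ins, not the actual MkIV converter, which handles many more PostScript operators and does cleanup first:

```lua
-- simplified sketch of the gsub based PostScript to pdf conversion;
-- the real converter in ConTeXt MkIV covers many more operators
mp = { }

local literals = { }

function mp.curveto(...)
    -- in MkIV this pdf code would end up in a \pdfliteral
    literals[#literals+1] = table.concat({ ... }, " ") .. " c"
end

local postscript = "1 0 0 2 0 0 curveto"

-- rewrite "a b c d e f curveto" into "mp.curveto(a,b,c,d,e,f)"
local code = postscript:gsub(
    "(%S+) (%S+) (%S+) (%S+) (%S+) (%S+) curveto",
    "mp.curveto(%1,%2,%3,%4,%5,%6)"
)

-- execute the generated string as lua code (loadstring in lua 5.1)
assert((loadstring or load)(code))()

print(literals[1]) -- 1 0 0 2 0 0 c
```

In the real converter one gsub pass rewrites the whole PostScript stream at once, which is why the cleanup has to happen before the substitution rather than during it.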
How about performance

I'm sure that a better implementation is possible than I did (after all, PostScript is a language) but I went for a speedy solution. The following table shows some timings.

                            gsub   lpeg
  100 times test graphic     2.5    0.5
  100 times big graphic      9.2    1.9

The test graphic has about everything that MetaPost can output, including special tricks that deal with transparency and shading. The big one is just four copies of the test graphic. So, the lpeg based variant is about 5 times faster than the original variant. I'm not saying that the original implementation is that brilliant, but a 5 times improvement is rather nice, especially when you consider that lpeg is still experimental and each version performs better. The tests were done with lpeg version 0.5, which performs slightly faster than its predecessor.

It's worth mentioning that the original gsub based variant was already a bit improved compared to its first implementation. There we collected the TEX (pdf) code in a table and passed it in its concatenated form to TEX. Because the Lua to TEX interface is by now quite efficient, we can just pass the intermediate results directly to TEX.

file io

The repertoire of functions that deal with individual characters in Lua is small. This does not bother us too much because the individual character is not what TEX is mostly dealing with. A character or sequence of characters becomes a token (internally represented by a table), and tokens result in nodes (again tables, but larger). There are many more tokens involved than nodes: in ConTEXt a ratio of 200 tokens to 1 node is not uncommon. A letter like x becomes a token, but the control sequence \command also ends up as one token. Later on, this x may become a character node, possibly surrounded by some kerning. The input characters "width" result in 5 tokens, but may not end up as nodes at all, for instance when they are part of a key/value pair in the argument to a command. Just as there is no guaranteed one-to-one relationship between input characters and tokens, there is no straight relation between tokens and nodes.
When dealing with input it is good to keep in mind that because of these interpretation stages one can never say that 1 megabyte of input characters ends up as 1 million of something in memory. Just think of how many megabytes of macros get stored in a format file much smaller than the sum of bytes.

We only deal with characters or sequences of bytes when reading from an input medium. There are many ways to deal with the input. For instance one can process the input lines as TEX sees them, in which case TEX takes care of the utf input. When we're dealing with other input encodings we can hook code into the file openers and readers and convert the raw data ourselves. We can for instance read in a file as a whole, convert it using the normal expression handlers or the byte(pair) iterators that LuaTEX provides, or we can go really low level using native Lua code, as in:
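The code example introduced here falls outside the extracted chunk. A minimal sketch of such a low level byte reader, with io.bytes as an illustrative helper name, might look like this (the temporary file is only there to make the sketch self-contained):

```lua
-- minimal sketch of a low level byte iterator over a file
local function nextbyte(f)
    local c = f:read(1)          -- read one byte (a one character string)
    if c then
        return string.byte(c)    -- hand it back as a number
    end                          -- returning nil ends the iteration
end

function io.bytes(f)
    return nextbyte, f           -- iterator usable in a generic for loop
end

-- demonstrate on a small file written here for the purpose
local name = os.tmpname()
local g = assert(io.open(name, "wb"))
g:write("abc")
g:close()

local bytes = { }
local f = assert(io.open(name, "rb"))
for b in io.bytes(f) do
    bytes[#bytes+1] = b
end
f:close()
os.remove(name)

print(table.concat(bytes, " ")) -- 97 98 99
```

Code along these lines is what one would hook into the file readers when converting a non-utf encoding by hand.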