How a tracing JIT discovers and uses type information #24

Open
lukego opened this issue Jul 12, 2018 · 0 comments

Compilers need to know the types of variables in order to generate efficient code. This is easy in a statically typed language like C because the types are hard-coded in the source code. It is harder in a dynamic language like Lua, where any variable can hold a value of any type at runtime.

The RaptorJIT solution (inherited from LuaJIT) is to wait until the code actually runs, observe what types the variables actually have, and speculate that the types will often be the same in the future. The compiler then generates machine code specialized for those types.
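To make the idea concrete, here is a hand-written Lua sketch (not actual JIT output) of what type-specialized code conceptually does: a cheap type guard protects a fast path, and a failed guard falls back to generic handling, just as a trace exits back to the interpreter. The names add_specialized and add_generic are invented for this illustration.

```lua
-- Hypothetical sketch: a type guard in front of a specialized fast path.
-- In a real trace the guard is a couple of machine instructions and the
-- fallback is a side exit back to the interpreter.
local function add_generic(a, b)   -- stands in for the interpreter
  return a + b
end

local function add_specialized(a, b)
  -- guard: speculate that both operands are numbers
  if type(a) ~= "number" or type(b) ~= "number" then
    return add_generic(a, b)       -- speculation failed: take the slow path
  end
  return a + b                     -- fast path: plain machine arithmetic
end

print(add_specialized(2, 3))       --> 5
```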

Consider this C code:

int add(int a, int b) {
    return a + b;
}

This is easy to compile because at runtime a, b, and the result will always be machine integers. Here is the code generated by gcc -O3:

0000000000000000 <add>:
   0:	8d 04 37             	lea    eax,[rdi+rsi*1]
   3:	c3                   	ret

Consider this Lua code now:

function add(a, b)
    return a + b
end

This is different because a and b could be any kind of Lua value at runtime: numbers, tables, strings, etc. The result depends on those types, and it might be a value or an error. If the compiler generated machine code prepared for every possible type, the result would be hopelessly general, much like an interpreter.
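A quick way to see the problem in plain Lua: the same add works on several type combinations, and each one takes a different path through the language runtime.

```lua
function add(a, b)
    return a + b
end

print(add(1, 2))             --> 3  (number + number)
print(add("1", 2))           --> 3  (Lua coerces the numeric string)
print(pcall(add, "a", "b"))  -- false plus an arithmetic error message,
                             -- unless a metamethod intervenes
```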

So the compiler waits until the function is actually called and then generates machine code that is specialized for the types it observes.

If we call the add function in this loop:

local sum = 0

for i = 1, 100 do
   sum = add(sum, i)
end

print("sum is "..sum)

Then the compiler will observe that the values are initially numbers, speculate that the values will often be numbers in the future too, and generate code that is specialized for numbers and uses machine arithmetic instructions.
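As a rough illustration (hand-written, not a real trace; stock LuaJIT can show actual traces with luajit -jdump), the recorded loop behaves as if add had been inlined and the type check hoisted out of the loop:

```lua
-- Hypothetical sketch of the numeric specialization of the loop.
-- The guard runs once on trace entry; the loop body is pure machine
-- arithmetic with no per-iteration type checks.
local sum = 0
assert(type(sum) == "number")  -- stands in for the trace-entry guard
for i = 1, 100 do
  sum = sum + i                -- add() inlined; raw addition
end
print("sum is "..sum)          --> sum is 5050
```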

Suppose we later call the same add function with an object that overloads the + operator to collect values in an array:

local arr = {}
setmetatable(arr, { __add = function (arr, x) table.insert(arr, x) return arr end })

for i = 1, 100 do
   sum = add(arr, i)
end

print("arr is "..#arr.." elements")

Then the compiler will generate an entirely new block of machine code in which the implementation of add is specialized for the a argument being exactly this kind of object: a table whose metatable overloads the + operator to insert the b value into an array. This generated code will be very different from the numeric code that was compiled previously.
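Again as a hand-written illustration (not real JIT output), the second trace conceptually guards on the argument being a table with the metatable seen at recording time, then runs the inlined metamethod body. The names add_table_trace and recorded_mt are invented here.

```lua
-- Hypothetical sketch of the second specialization of add.
local recorded_mt = { __add = function (arr, x) table.insert(arr, x) return arr end }

local function add_table_trace(arr, x)
  -- guard: the argument must look exactly as it did when the trace was recorded
  if type(arr) ~= "table" or getmetatable(arr) ~= recorded_mt then
    error("side exit: fall back to the interpreter")
  end
  table.insert(arr, x)   -- the __add metamethod body, inlined into the trace
  return arr
end

local a = setmetatable({}, recorded_mt)
for i = 1, 100 do add_table_trace(a, i) end
print("a has "..#a.." elements")   --> a has 100 elements
```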

So a static compiler optimizes for the special case by excluding, at compile time, the possibility that any other cases can occur at runtime. A tracing JIT instead waits until runtime, observes which types are actually used while the program runs, and then optimizes on the speculative assumption that the same types will tend to recur over the lifetime of the program.

That is how a tracing JIT discovers and uses type information.
