
Single precision floats #217

Open
dselivanov opened this issue Jun 12, 2019 · 2 comments

Comments

@dselivanov

It would be useful to add single precision floats to the package. The idea is that we can use R's integer vectors (which are always 32-bit) to store float32 numbers (the same approach is used in the https://github.com/wrathematics/float package).

I'm not that familiar with the rray codebase, so I'm curious how much effort you think it would take. My hypothesis is that it shouldn't take too much: xtensor is a templated library, so in theory most of the code written for double could be reused.
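
For concreteness, here is a minimal sketch of the bit-reinterpretation idea, assuming R's C API (`Rinternals.h`); the helper names are hypothetical, not part of rray or the float package:

```cpp
// Sketch: store float32 payloads in R's 32-bit integer storage.
// R integer vectors are contiguous 32-bit words, so each slot can
// hold the bit pattern of one IEEE-754 single-precision float.
#include <Rinternals.h>
#include <cstring>

// Copy float32 data into a freshly allocated INTSXP.
// memcpy avoids strict-aliasing issues with a direct pointer cast.
SEXP make_float32(const float* src, R_xlen_t n) {
    SEXP out = PROTECT(Rf_allocVector(INTSXP, n));
    std::memcpy(INTEGER(out), src, n * sizeof(float));
    UNPROTECT(1);
    return out;
}

// Copy the bits back out as float32.
void read_float32(SEXP x, float* dst) {
    std::memcpy(dst, INTEGER(x), Rf_xlength(x) * sizeof(float));
}
```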

@DavisVaughan
Member

I think the limitation might first show up in the xtensor-r bindings. xtensor itself should be able to handle it since it is templated, but xtensor-r checks that the input is one of a few special-cased types before creating the xtensor object: https://github.com/QuantStack/xtensor-r/blob/master/include/xtensor-r/rcontainer.hpp#L144
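
To illustrate the kind of limitation being described (this is not the actual xtensor-r code, just a sketch of compile-time special-casing): the bindings map a closed set of C++ scalar types to SEXP types, so a type with no mapping, such as `float`, cannot be used to build an R-backed container.

```cpp
// Illustrative only: a closed, compile-time mapping from C++ scalar
// types to SEXP types, roughly the shape of the check being referenced.
#include <Rinternals.h>
#include <cstdint>

template <typename T>
struct r_sexp_type;  // primary template left undefined on purpose

template <> struct r_sexp_type<int32_t> { static constexpr SEXPTYPE value = INTSXP;  };
template <> struct r_sexp_type<double>  { static constexpr SEXPTYPE value = REALSXP; };

// r_sexp_type<float>::value is a compile-time error: there is no
// specialization, which is how an unsupported scalar type gets rejected.
```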

@dselivanov
Author

Well, int32_t should be fine. The issue is how to reinterpret int32_t* as float*. We would need to check some attribute of the R object in order to know that it actually holds float32 data.
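
A sketch of that check, assuming the convention used by the float package of tagging the integer vector with a `"float32"` class attribute (the helper names here are hypothetical):

```cpp
// Decide whether an INTSXP actually carries float32 bits by inspecting
// its class attribute, then reinterpret the storage in place.
#include <Rinternals.h>
#include <cstring>

bool is_float32(SEXP x) {
    if (TYPEOF(x) != INTSXP) return false;
    SEXP cls = Rf_getAttrib(x, R_ClassSymbol);  // R_NilValue if unclassed
    for (R_xlen_t i = 0; i < Rf_xlength(cls); ++i) {
        if (std::strcmp(CHAR(STRING_ELT(cls, i)), "float32") == 0) return true;
    }
    return false;
}

float* float32_data(SEXP x) {
    // Caller must have verified is_float32(x) first.
    return reinterpret_cast<float*>(INTEGER(x));
}
```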
