
Removes dependence on boost::dynamic_bitset and fixes test failure on Windows #108

Conversation

@sherm1 (Member) commented Mar 26, 2016

This PR changes the bitset specialization of morton_functor to use std::bitset instead of boost::dynamic_bitset. The number of bits must now be set at compile time, but there doesn't seem to be any reason for run-time sizing, which wasn't being used or tested anywhere.

When this is merged there will no longer be any boost dependency in the fcl code. However, the tests are still using the boost test framework.

This PR also fixes a Windows test failure reported in #107. That reduces the number of Windows failures from 4 to 3.
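
For context, here is a minimal sketch of the idea (illustrative only, not the actual FCL implementation; the function name and quantized coordinate inputs are hypothetical): with std::bitset the bit count N becomes a compile-time template parameter rather than a run-time size.

#include <bitset>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch only: interleave the low N/3 bits of three quantized
// coordinates into an N-bit Morton code held in a compile-time-sized bitset.
template <std::size_t N>
std::bitset<N> interleave_morton(std::uint32_t x, std::uint32_t y, std::uint32_t z)
{
  static_assert(N % 3 == 0, "N must be a multiple of 3");
  static_assert(N / 3 <= 32, "each coordinate contributes at most 32 bits");
  std::bitset<N> code;
  for (std::size_t i = 0; i < N / 3; ++i)
  {
    code[3 * i]     = (x >> i) & 1u;  // bit i of x goes to position 3*i
    code[3 * i + 1] = (y >> i) & 1u;  // bit i of y goes to position 3*i + 1
    code[3 * i + 2] = (z >> i) & 1u;  // bit i of z goes to position 3*i + 2
  }
  return code;
}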


/// @brief Functor to compute N bit morton code for a given AABB
/// N must be a multiple of 3.
template<int N>
Member commented:

This is rather nitpicky, but maybe change to template<std::size_t N>?

Member Author (sherm1) replied:

Not nitpicky -- you are right! It should have had the same type as the template argument for std::bitset. Will fix.

@mamoll (Member) commented Mar 26, 2016

Looks good to me. Any idea why the gcc builds fail on Travis? Is gcc 4.8 perhaps too old?

@sherm1 (Member Author) commented Mar 27, 2016

Any idea why the gcc builds fail on Travis?

Looks like it didn't match the bitset template argument to the corresponding specialization. I'll investigate.
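
A hedged illustration of the kind of mismatch involved (the declarations here are placeholders, not the FCL headers): the non-type parameter of a partial specialization over std::bitset<N> must have the same type as std::bitset's own parameter, i.e. std::size_t rather than int, or the specialization may not be matched.

#include <bitset>
#include <cstddef>

// Illustration only (hypothetical declarations, not the FCL headers).
template <typename T>
struct morton_functor;                    // primary template

template <std::size_t N>                  // was: template <int N>
struct morton_functor<std::bitset<N>>
{
  static constexpr std::size_t bits = N;  // number of bits in the Morton code
};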

@sherm1 (Member Author) commented Mar 27, 2016

@mamoll -- I fixed the template argument. While I was investigating the compile problem, gcc 5 gave me some helpful warnings that uncovered a problem with the new definitions for the fixed-size int types (see commit message). With those two fixes, the build succeeds on gcc 4.8, clang 3.5, and VC++ 2015 (although there are still 3 test failures on Windows).

Please re-review.

typedef std::int64_t FCL_INT64;
typedef std::uint64_t FCL_UINT64;
typedef std::int32_t FCL_INT32;
typedef std::uint32_t FCL_UINT32;
Member commented:

Good thing you caught the backwards definition! That was my bad. According to http://en.cppreference.com/w/cpp/types/integer std::int64_t et al. are an optional part of the C++11 standard and are thus not guaranteed to be defined, whereas the fast versions are. What problems occur if there are too many bits?

Member Author (sherm1) replied:

std::int64_t et al. are an optional part of the C++11 standard ...

They are only optional on platforms that can't provide exactly 32- and 64-bit two's complement types. According to C++11 7.20.1.1/3 (quoting from Stack Overflow):

However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.

A more general solution would be to use the "least" types like std::int_least32_t, which is exactly the specified size if that is available and the next larger size otherwise. The problem with that, IMO, is that the 32 in FCL_INT32 implies exactly 32 bits by analogy with int32_t; I would rename it FCL_INT_LEAST32 if that were what it meant. Actually, I would prefer to remove these typedefs altogether and use the standard ones instead (in another PR).
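
A small sketch of the two options being weighed (illustrative only, not the FCL headers; FCL_INT_LEAST32 is the hypothetical name suggested above):

#include <climits>
#include <cstdint>

// Illustrative only: exact-width typedefs are defined wherever the hardware
// supports them, while the "least" typedefs are always defined but may be
// wider than requested.
typedef std::int32_t       FCL_INT32;        // exactly 32 bits, no padding
typedef std::int_least32_t FCL_INT_LEAST32;  // at least 32 bits, always present

static_assert(sizeof(FCL_INT32) * CHAR_BIT == 32,
              "FCL_INT32 is exactly 32 bits wide");
static_assert(sizeof(FCL_INT_LEAST32) * CHAR_BIT >= 32,
              "FCL_INT_LEAST32 is at least 32 bits wide");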

What problems occur if there are too many bits?

That would depend on the usage. I believe that in the Morton case the 32- and 64-bit specializations were intended to provide compact storage, so choosing a 64-bit representation for the 32-bit one would be odd (even if that were the fastest type). I would just like the integer size dependencies to be clear. Currently this is kind of a mess in FCL, and I'm wondering whether that may account for some of the failures I'm seeing on Windows.
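
To make the storage point concrete, a quick check one can run (not part of the PR; results are platform dependent -- with glibc on x86-64, for instance, the "fast" 32-bit type is typically 8 bytes):

#include <cstdint>
#include <cstdio>

// Illustrative check: the "fast" typedefs may be wider than the requested
// width, which makes them a poor fit for compact Morton code storage.
int main()
{
  std::printf("int32_t:      %zu bytes\n", sizeof(std::int32_t));
  std::printf("int_fast32_t: %zu bytes\n", sizeof(std::int_fast32_t));
  return 0;
}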

Here are some related discussions: Google style guide, Khronos issue

Member replied:

What a mess indeed. I am satisfied that your suggested change is the best (or least bad) choice.

@mamoll merged commit 7133432 into flexible-collision-library:master Mar 27, 2016
@jslee02 added this to the FCL 0.5.0 milestone Mar 27, 2016
@sherm1 deleted the get-rid-of-boost-dynamic_bitset branch Mar 28, 2016, 00:53