Implement our own PRNG for rayon-core #571

Merged (2 commits, Jun 5, 2018)

Conversation

@cuviper (Member) commented Apr 30, 2018:

Our need for randomness is very simple, just to spread the work-stealing
somewhat equally across all possible victims. Rather than worrying
about compatibility and updates to the `rand` crate, we now implement a
simple `xorshift*` generator for this. We keep `rand` only as a
dev-dependency for testing purposes.

Closes #570.
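For reference, the core of an xorshift64* step is tiny. The sketch below follows Vigna's published shift constants and multiplier; the struct name and the Cell-based interior mutability are illustrative assumptions, not necessarily the exact code that landed:

use std::cell::Cell;

struct XorShift64Star {
    state: Cell<u64>, // must be seeded nonzero, or the state sticks at zero
}

impl XorShift64Star {
    fn next(&self) -> u64 {
        let mut x = self.state.get();
        debug_assert_ne!(x, 0);
        x ^= x >> 12;
        x ^= x << 25;
        x ^= x >> 27;
        self.state.set(x);
        // the trailing multiply is the "*" in xorshift*: it scrambles the
        // otherwise weak low bits of a plain xorshift
        x.wrapping_mul(0x2545_f491_4f6c_dd1d)
    }
}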

@nikomatsakis (Member) left a comment:

Looks good, left a few nits.

// we're only calling this once from each new thread. The address is also shift-XORed to
// give us some high bits even on 32-bit platforms.
let base = 0;
let addr = &base as *const _ as u64;
@nikomatsakis (Member) commented:

Hmm. I'm not crazy about this technique for getting a pointer into the stack. It feels fragile in the face of potential Rust unsafe code guidelines — e.g., I could sort of imagine base getting promoted to read-only memory or something. But probably we wouldn't be able to get away with such a rule anyway. It might be nice to have some kind of intrinsic for "random stack address", though — this is something I want to do from time to time. Anyhow I suppose it's fine "as is" for now, I filed https://github.com/nikomatsakis/rust-memory-model/issues/47 to track it and ponder it more deeply. =)

// give us some high bits even on 32-bit platforms.
let base = 0;
let addr = &base as *const _ as u64;
let seed = addr ^ (addr << 32);
@nikomatsakis (Member) commented May 31, 2018:

Nit: can we add an assert!(seed != 0); just as a failsafe? (or debug_assert!)
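Concretely, the suggested failsafe would be a one-liner (a sketch; the choice between assert! and debug_assert! is the open question above):

let seed = addr ^ (addr << 32);
// xorshift state must never be zero: a zero state maps to zero forever
debug_assert!(seed != 0);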

-     rng.next_u32() % num_threads as u32
- } as usize;

+ let start = self.rng.next() as usize % num_threads;
@nikomatsakis (Member) commented:

Nit: since we control rng, why not add a nice wrapper function like self.rng.next_usize(num_threads) that hides all this modulo stuff?
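Such a wrapper could be as small as the following sketch; the plain modulo reduction is assumed here, and its slight bias when n does not divide 2^64 is harmless for picking a steal-start index:

impl XorShift64Star {
    /// Return a value in 0..n, hiding the modulo at the call site.
    fn next_usize(&self, n: usize) -> usize {
        (self.next() % n as u64) as usize
    }
}

The call site above then reads simply: let start = self.rng.next_usize(num_threads);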

@cuviper (Member, Author) commented Jun 4, 2018:

OK, now I'm seeding with a hashed global counter. The counter gives us a unique start to each PRNG, and hashing that gives us a heap of independent bits. This will be somewhat reproducible between runs (modulo thread startup races), but we don't really care about that.
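A sketch of that seeding scheme, assuming a global AtomicUsize counter and the standard library's DefaultHasher (the actual names and hasher choice in the patch may differ):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicUsize, Ordering};

static SEED_COUNTER: AtomicUsize = AtomicUsize::new(0);

fn fresh_seed() -> u64 {
    loop {
        // each new PRNG gets a distinct counter value...
        let i = SEED_COUNTER.fetch_add(1, Ordering::Relaxed);
        // ...and hashing it spreads that small difference across all 64 bits
        let mut hasher = DefaultHasher::new();
        i.hash(&mut hasher);
        let seed = hasher.finish();
        if seed != 0 {
            return seed; // reject zero: xorshift state must be nonzero
        }
    }
}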

@nikomatsakis (Member) commented:
bors r+

bors bot added a commit that referenced this pull request on Jun 5, 2018:
571: Implement our own PRNG for rayon-core r=nikomatsakis a=cuviper

Co-authored-by: Josh Stone <cuviper@gmail.com>
bors bot commented Jun 5, 2018
