
Lock-free BlockingIdentifierGenerator LoHi #1610

Open
franz1981 wants to merge 1 commit into base: main

Conversation


franz1981 commented Apr 24, 2023

This is a version that pays a single allocation cost per blockSize to avoid any synchronized.
It can be made GC-free by playing some bit tricks, packing both lo and hi into a single long (assuming neither is ever negative, which is likely) - or with some more complex coordination mechanism (yet to be designed).
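For illustration, here is a minimal sketch of what the "bit tricks" idea could look like, assuming (as the comment above does) that both values stay non-negative and fit in 32 bits. The class and method names are hypothetical, not the PR's actual code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: pack hi (upper 32 bits) and lo (lower 32 bits) into
// one long, so a single blind getAndIncrement advances lo without
// allocating a new LoHi object per block. Assumes hi and lo are never
// negative and fit in 32 bits each.
final class PackedLoHi {
    private final AtomicLong state = new AtomicLong(pack(0, 0));

    private static long pack(long hi, long lo) {
        return (hi << 32) | (lo & 0xFFFF_FFFFL);
    }

    static long hiOf(long packed) { return packed >>> 32; }
    static long loOf(long packed) { return packed & 0xFFFF_FFFFL; }

    /** Publish a freshly fetched hi and reset lo to 0. */
    void newBlock(long hi) {
        state.set(pack(hi, 0));
    }

    /** Returns hi * blockSize + lo, or -1 if the current block is exhausted. */
    long next(long blockSize) {
        long packed = state.getAndIncrement(); // blind increment of the lo half
        long lo = loOf(packed);
        if (lo >= blockSize) {
            return -1; // exhausted; the caller must fetch a new hi
        }
        return hiOf(packed) * blockSize + lo;
    }
}
```

Note that losing threads keep incrementing the exhausted block's lo until newBlock replaces it; with 32 bits of headroom that is harmless in practice, but it is exactly the kind of coordination detail the "yet to be designed" remark refers to.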

There are a few things I don't know yet before un-drafting:

  1. If many concurrent state.next(hi) calls happen, do we need to validate the new (hi, lo) against the existing values? I.e. new hi > old hi (or differing exactly by blockSize?) && old lo >= blockSize.
  2. If many concurrent state.next(hi) calls happen, do we need to enforce some ordering of the updates? E.g. concurrent next(10), next(20), next(30) updates in this PR are not yet guaranteed to be applied in any specific order; I cannot see any ordering enforced in the original code either, but better safe than sorry with concurrency.
  3. Is blockSize a constant? I'm assuming it is...

private volatile LoHi loHi;

public long hi(long hi) {
loHi = new LoHi(hi);
franz1981 (Author) commented on the diff:

@Sanne
This could be made much stricter: base it on the previous value, and fail if a concurrent update happens too (by using a compareAndSet)
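A minimal sketch of that stricter publication, under the assumption that the caller captured the state it observed when it detected exhaustion (class and method names here are illustrative, not the PR's code):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: replace the blind volatile write in hi(long) with a
// compareAndSet against the previously observed state, so only one of
// several concurrent refreshers wins and the losers can reuse the winner's
// freshly published block instead of overwriting it.
final class LoHiHolder {
    static final class LoHi {
        final long hi;
        LoHi(long hi) { this.hi = hi; }
    }

    private final AtomicReference<LoHi> loHi = new AtomicReference<>();

    /**
     * Try to publish a block for newHi; succeeds only if `expected` is still
     * the current state, i.e. no other thread refreshed it in the meantime.
     */
    boolean tryPublish(LoHi expected, long newHi) {
        return loHi.compareAndSet(expected, new LoHi(newHi));
    }

    LoHi current() { return loHi.get(); }
}
```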

franz1981 closed this Apr 24, 2023
franz1981 reopened this Apr 24, 2023

franz1981 commented Apr 24, 2023

@Sanne
@DavideD
I'm a bit worried about the original code:

see

final long next = next();
if ( next != -1 ) {
	return CompletionStages.completedFuture( next );
}
if ( getBlockSize() <= 1 ) {
	return nextHiValue( connectionSupplier )
			.thenApply( i -> next( i ) );
}

What happens if 2 concurrent threads both see -1 because they both exhausted the (same) current sequence?
They both compete to query the database to acquire new values, and then compete (again) to update hi.
I don't think this will end well, unless there's some assumption I'm missing about the way things are executed here...

And the same can happen with a combiner as well: within the combiner you can have more than one update in flight, each one competing both to hit the DB and to refresh the new hi.
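One common way to close this kind of race is a "single-flight" guard: the first thread to see exhaustion installs a shared future, and every later thread awaits that same future instead of issuing its own database round-trip. This is a hypothetical sketch, not the PR's code; fetchHi stands in for the real nextHiValue( connectionSupplier ) call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical sketch of a single-flight refresh: at most one DB query is
// in flight at a time; concurrent callers piggyback on the pending one.
final class SingleFlightRefresher {
    private final AtomicReference<CompletableFuture<Long>> inFlight =
            new AtomicReference<>();

    CompletableFuture<Long> refresh(Supplier<CompletableFuture<Long>> fetchHi) {
        CompletableFuture<Long> mine = new CompletableFuture<>();
        CompletableFuture<Long> existing = inFlight.compareAndExchange(null, mine);
        if (existing != null) {
            return existing; // another thread is already querying the DB
        }
        fetchHi.get().whenComplete((hi, err) -> {
            inFlight.set(null); // allow a fresh refresh once this one settles
            if (err != null) {
                mine.completeExceptionally(err);
            } else {
                mine.complete(hi);
            }
        });
        return mine;
    }
}
```

With this in place, only the winning thread pays the DB round-trip; everyone else waits on the same CompletableFuture and then retries next() against the refreshed state.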


private static final AtomicIntegerFieldUpdater<LoHi> LO_UPDATER = AtomicIntegerFieldUpdater.newUpdater(LoHi.class, "lo");
private final long hi;
private volatile long lo;
A project member commented on the diff:

should this be an int? Especially as you're using an AtomicIntegerFieldUpdater?

franz1981 (Author) replied:

It's for 2 reasons: it won't change the object layout (per JOL), and it makes life easier thanks to the "blind" getAndIncrement (which scales much better than a compare-and-set loop).
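To illustrate the scaling point: a blind getAndIncrement is a single fetch-and-add, whereas any bounded increment needs a compare-and-set loop that re-reads and retries under contention. A minimal sketch with illustrative names (not the PR's actual code):

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

// Hypothetical sketch contrasting the two increment styles discussed above.
final class Counter {
    static final AtomicLongFieldUpdater<Counter> LO =
            AtomicLongFieldUpdater.newUpdater(Counter.class, "lo");
    volatile long lo;

    /** One atomic fetch-and-add; never retries, regardless of contention. */
    long blindNext() {
        return LO.getAndIncrement(this);
    }

    /** CAS loop: retries whenever another thread wins the race. */
    long boundedNext(long limit) {
        for (;;) {
            long cur = LO.get(this);
            if (cur >= limit) {
                return -1; // exhausted
            }
            if (LO.compareAndSet(this, cur, cur + 1)) {
                return cur;
            }
        }
    }
}
```

The blind variant over-increments past the limit (which is why the caller checks lo against blockSize afterwards), but that check is a plain read rather than a retry loop.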


Sanne commented Apr 24, 2023

I'd challenge you in ever noticing the current synchronized on profiling data :)


franz1981 commented Apr 24, 2023

I'd challenge you in ever noticing the current synchronized on profiling data :)

eheh I know, but I think it won't cost anything to make it lock-free like this. I'm more concerned instead by this -> #1610 (comment)

franz1981 commented:

To quote my future self (and @Sanne) re #1610 (comment):

This is not a "real" problem that would break anything, just an efficiency one. In short: if more than one thread concurrently tries to increase the sequence due to exhaustion and succeeds, but the hi/lo updates land in the "wrong" order (i.e. creating gaps, or "jumping forward" before previously created batches are consumed), it can cause consumers to request more sequences from the DB than necessary (again), because in both cases (a missed new batch or an "old" overwritten one) somebody would still be left unsatisfied.
The existing tests don't verify this, hence there was no reason to notice it.

@franz1981 franz1981 marked this pull request as ready for review April 24, 2023 10:27
franz1981 commented:

@Sanne undrafting, but feel free to pick whatever you like for future work on this, bud ;)
