Fix tokenization for query marker #351

Open · wants to merge 1 commit into main
Conversation

sleep-ee

Description
This pull request addresses inconsistent tokenization when prepending a query marker to the input text. The current implementation prepends ". " to each string and then replaces it with the query marker. This approach can lead to inconsistent behavior because different tokenizers handle the punctuation and surrounding space differently, leaving superfluous ids in the tokenized output.

To resolve this issue, the changes ensure that the query marker is directly added to the beginning of each string before tokenization. This method avoids the potential inconsistency by ensuring that the query marker is always tokenized as a single token.
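A rough sketch of the replacement scheme described above (hypothetical and simplified, not the repo's actual code), assuming a BERT-style tokenizer:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
marker_id = tokenizer.convert_tokens_to_ids("[unused0]")

ids = tokenizer(". what is colbert?")["input_ids"]  # [CLS] . what is colbert ? [SEP]
# Position 1 is assumed to hold the "." token; overwrite it with the marker id.
ids[1] = marker_id

If a tokenizer does not map the leading ". " to exactly one token at position 1, this replacement leaves stray ids behind, which is the inconsistency described above.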

Changes
Modified the tensorize method in doc_tokenizer.py and query_tokenizer.py to prepend the query marker directly to each string in batch_text.
Included a utility to check whether the query marker tokenizes into a single token, ensuring consistent behavior across tokenizers (see the sketch below).
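A hypothetical sketch of such a check (illustrative only, not the PR's exact code):

def marker_is_single_token(tokenizer, marker: str) -> bool:
    # True if the marker string maps to exactly one token id for this tokenizer.
    return len(tokenizer.encode(marker, add_special_tokens=False)) == 1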

Related Issues
Fixes issue #346.

@NohTow
Contributor

NohTow commented Jun 24, 2024

The problem with this code is that the marker tokens are not tokenized correctly.
For the ColBERTv2 model, the query/doc marker tokens are respectively [unused0] and [unused1], with corresponding ids of 1 and 2.

With this code, the resulting [CLS] + marker sequence is [101, 1031, 15171, 2487, 1033], i.e. [CLS] [ unused ##1 ], instead of [101, 2], i.e. [CLS] [unused1].
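For reference, this can be reproduced with a plain Hugging Face tokenizer. A minimal sketch, assuming the bert-base-uncased vocabulary that ColBERTv2 builds on:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Prepending the marker as plain text lets the WordPiece tokenizer split it apart.
print(tokenizer.encode("[unused1]", add_special_tokens=False))
# [1031, 15171, 2487, 1033] -> "[", "unused", "##1", "]"

# Looking the marker up directly keeps it as a single id.
print(tokenizer.convert_tokens_to_ids("[unused1]"))
# 2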

@robro612
Contributor

@NohTow I'll pick this up if you don't mind. We at @jinaai need to fix the tokenizer behavior so that it works properly on our upcoming model too; it currently leaves an extra punctuation token when we do the replacement.

@NohTow
Contributor

NohTow commented Aug 13, 2024

@robro612 sure!
FYI, I fixed this in our library but did not have time to make a proper PR here, because I would need to spin up the repo to adjust the code and write some tests. Here is the code.

I created a dedicated function to add a marker in the second position of the tensor:

def insert_prefix_token(self, tensor, prefix_id):
    # Build a column filled with the marker id and insert it right after the
    # first token ([CLS]) of every row in the batch.
    prefix_tensor = torch.full(
        (tensor.size(0), 1), prefix_id, dtype=tensor.dtype, device=tensor.device
    )
    return torch.cat([tensor[:, :1], prefix_tensor, tensor[:, 1:]], dim=1)

This function is then called with a different id for the query/document, and is also used for the attention mask and for token_type_ids if the model requires them (such as bert-base):

            features["input_ids"] = self.insert_prefix_token(
                features["input_ids"], self.query_prefix_id
            )
            # Update the attention mask to account for the new token
            features["attention_mask"] = self.insert_prefix_token(
                features["attention_mask"], 1
            )
            if "token_type_ids" in features:
                features["token_type_ids"] = self.insert_prefix_token(
                    features["token_type_ids"], 0
            )

This is not the most elegant way of doing it, but it is robust, and since tokenization is done on the CPU, the extra memory used to create the new tensor is not an issue.
Also, the function is only strictly required for input_ids; prepending could be used for attention_mask and token_type_ids instead (but this way is more readable). Hope it helps.
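As an illustration of that prepending alternative, a minimal hypothetical sketch (not from the PR): since position 0 of attention_mask is the [CLS] slot and is always 1, and token_type_ids starts with 0, prepending a constant column yields the same tensor as inserting it after [CLS].

import torch

def prepend_constant_column(tensor, value):
    # Prepend a column filled with `value` to a [bsize, seq_len] tensor.
    column = torch.full(
        (tensor.size(0), 1), value, dtype=tensor.dtype, device=tensor.device
    )
    return torch.cat([column, tensor], dim=1)

# features["attention_mask"] = prepend_constant_column(features["attention_mask"], 1)
# features["token_type_ids"] = prepend_constant_column(features["token_type_ids"], 0)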

@robro612
Contributor

Thanks Antoine, that will be helpful in making the PR here. I'll have to remember to reduce the tokenizer's max length by 1 so that, when we add the marker post-tokenization, the tensors come out as [bsize, max_{q,d}_len] as expected.
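A hypothetical sketch of that adjustment (query_max_length, the example queries, and the standalone insert_prefix_token variant are placeholders, not the repo's actual API):

from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
query_max_length = 32  # placeholder value
query_prefix_id = tokenizer.convert_tokens_to_ids("[unused0]")

def insert_prefix_token(tensor, prefix_id):
    # Standalone variant of the method shown above.
    prefix = torch.full((tensor.size(0), 1), prefix_id, dtype=tensor.dtype)
    return torch.cat([tensor[:, :1], prefix, tensor[:, 1:]], dim=1)

batch_text = ["what is colbert?", "how are late interaction models trained?"]

# Tokenize to max_length - 1, then insert the marker so the final shape is
# [bsize, query_max_length].
features = tokenizer(
    batch_text,
    padding="max_length",
    truncation=True,
    max_length=query_max_length - 1,
    return_tensors="pt",
)
features["input_ids"] = insert_prefix_token(features["input_ids"], query_prefix_id)
features["attention_mask"] = insert_prefix_token(features["attention_mask"], 1)
assert features["input_ids"].shape[1] == query_max_length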
