
la: fix matrix multiplication #198

Closed

Conversation


@Eliyaan Eliyaan commented Feb 2, 2024

Fix #197
(I'm not sure this PR is correct, but it fixes the issue.)

Summary by CodeRabbit

  • Refactor
    • Adjusted parameters for matrix and vector multiplication functions to enhance data processing efficiency.


coderabbitai bot commented Feb 2, 2024

Walkthrough

The recent updates to the la/blas.v file address critical issues in the matrix and vector multiplication functions, focusing on the correctness of array dimensions and strides. These adjustments ensure that operations like dgemv and dger now handle large matrices without dimension errors, directly targeting the problems outlined in the linked issue (#197).

Changes

File: la/blas.v
Change summary: Adjusted parameters for the dgemv and dger functions, specifically array dimensions and strides.

Assessment against linked issues

Objective | Addressed | Explanation
  • Bug with matrix multiplication dimensions (#197)
  • Matrix multiplications should work without errors (#197)
  • Error with the leading dimension of matrix A in dgemv (#197)
  • Suggested checks in dgemv might be incorrect (#197)

Poem

🐰 Code hops through fields of code,
Fixing bugs where they showed.
With every line, and every tweak,
Ensures the math no longer bleak.




@coderabbitai coderabbitai bot left a comment


Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 5d03de9 and db328a3.
Files selected for processing (1)
  • la/blas.v (5 hunks)
Additional comments: 11
la/blas.v (11)
  • 139-140: The call to vlas.dgemv correctly updates the array dimensions and strides for the matrix-vector multiplication. However, ensure that the a.data array passed to arr_to_f64arr[T] is correctly converted to a []f64 array, as this is crucial for the correctness of the operation. The stride parameters and the leading dimension (a.n) seem correctly set according to the standard dgemv usage.
  • 171-171: Similar to the dgemv call in matrix_vector_mul, the dgemv call in matrix_tr_vector_mul appears to correctly handle the transpose operation by setting the first parameter to true. Again, verify the correct conversion of a.data and u to []f64 arrays and that the stride and leading dimension are set appropriately.
  • 203-203: The vlas.dger function call for performing the vector-vector transpose multiplication (vector_vector_tr_mul) correctly sets the dimensions, alpha scaling factor, and strides. Ensure that the input vectors u and v are correctly converted to []f64 arrays. This operation is sensitive to the correct dimensionality of the input arrays, so double-checking the conversion logic in arr_to_f64arr[T] is recommended.
  • 243-244: The vlas.dgemm call in matrix_matrix_mul correctly sets the parameters for matrix-matrix multiplication without transposition. It's important to ensure that a.data and b.data are correctly passed as []f64 arrays. The leading dimensions (a.m for a and b.m for b) are correctly used, aligning with the dgemm function's expectations.
  • 263-264: In matrix_tr_matrix_mul, the vlas.dgemm call correctly handles the transposition of matrix a by setting the first transposition parameter to true. As with other dgemm calls, verify the correct data type conversion and that the leading dimensions are set correctly, considering the transposition.
  • 272-273: The vlas.dgemm call in matrix_matrix_tr_mul correctly accounts for the transposition of matrix b by setting the second transposition parameter to true. Ensure the data arrays are correctly typed and that the leading dimensions reflect the transposed dimensions of b.
  • 281-282: For matrix_tr_matrix_tr_mul, both matrices a and b are transposed as indicated by both transposition parameters being set to true. Verify the correctness of array conversions and that the leading dimensions are appropriately set for the transposed matrices.
  • 290-291: The vlas.dgemm call in matrix_matrix_muladd correctly performs matrix multiplication with addition. The parameters, including the alpha scaling factor and the addition of the result to c.data, are set correctly. Ensure the data type conversions and leading dimensions are accurate.
  • 299-300: In matrix_tr_matrix_muladd, the transposition of matrix a is correctly handled. The addition of the result to c.data is also correctly implemented. As with other operations, verify the correctness of the data type conversions and the setting of leading dimensions.
  • 308-309: The vlas.dgemm call in matrix_matrix_tr_muladd correctly handles the transposition of matrix b and the addition of the result to c.data. Ensure the correctness of the data type conversions and the leading dimensions, especially considering the transposition of b.
  • 317-318: For matrix_tr_matrix_tr_mul_add, both matrices a and b are transposed, and the result is added to c.data. Verify the correctness of the data type conversions and that the leading dimensions are set correctly for the transposed matrices.

@ulises-jeremias
Member

@Eliyaan I think this PR is not correct. Are your matrices row-major? I will test it later.

@ulises-jeremias

I just fixed it on master along with some other fixes!


Successfully merging this pull request may close these issues.

Problem with matrix multiplication dimensions