
Large array transfer problem #66

Open
Lion-tang opened this issue May 13, 2022 · 3 comments
@Lion-tang

Env

Two Ubuntu 20.04 LTS guests running in VMware.

Description

Each guest can perform RDMA operations on a large array by itself when bound to 127.0.0.1. However, when one guest performs an RDMA operation on a large array against the other, no packets are sent: Wireshark captures nothing, while the same operation with a small array does produce packets.

By the way, regarding arrays that exceed the ulimit -l (max locked memory) limit: on a single guest I do not need to lift the locked-memory limit for RDMA, but when the two guests perform RDMA operations on a large array, the ulimit -l limit has to be lifted before they can connect.
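For reference, here is a sketch of how the locked-memory limit mentioned above can be inspected and lifted (the limits-file entries are an assumption; the exact location and user names vary by distro):

```shell
# Show the current max locked-memory (memlock) limit, in KiB
ulimit -l

# Lift it for the current shell session (requires a sufficient hard limit or root):
#   ulimit -l unlimited
#
# To make it persistent, add lines like these to /etc/security/limits.conf
# and log out and back in:
#   *  soft  memlock  unlimited
#   *  hard  memlock  unlimited
```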

The Rust code for the two clients is as follows:

// server
use async_rdma::{LocalMrReadAccess, RdmaListener};
use std::{
    io,
    net::{Ipv4Addr, SocketAddrV4},
    time::Duration,
};

const LEN: usize = 30 * 1024 * 1024;

async fn server(addr: SocketAddrV4) -> io::Result<()> {
    let rdma_listener = RdmaListener::bind(addr).await?;
    let rdma = rdma_listener.accept(1, 1, 512).await?;
    // receive the immediate value written by the client
    let imm_v = rdma.receive_write_imm().await?;
    // print the immediate value
    println!("immediate value: {}", imm_v);
    // receive the metadata of the mr sent by client
    let lmr = rdma.receive_local_mr().await?;
    // print the content of lmr, which was written by the client
    unsafe { println!("{:?}", *(lmr.as_ptr() as *const [i32; LEN])) };
    // wait for the agent thread to send all responses to the remote.
    tokio::time::sleep(Duration::from_secs(1)).await;
    Ok(())
}
#[tokio::main]
async fn main() {
    let addr = SocketAddrV4::new(Ipv4Addr::new(192, 168, 59, 131), 8081);
    server(addr).await.unwrap();
    tokio::time::sleep(Duration::from_secs(3)).await;
}


// client
use async_rdma::{LocalMrReadAccess, LocalMrWriteAccess, Rdma};
use std::{
    alloc::Layout,
    io,
    net::{Ipv4Addr, SocketAddrV4},
    time::Duration,
};


const LEN: usize = 30 * 1024 * 1024;

async fn client(addr: SocketAddrV4) -> io::Result<()> {
    let rdma = Rdma::connect(addr, 1, 1, 512).await?;
    let mut lmr = rdma.alloc_local_mr(Layout::new::<[i32;LEN]>())?;
    let mut rmr = rdma.request_remote_mr(Layout::new::<[i32;LEN]>()).await?;
    // zero-fill lmr; std::ptr::write_bytes avoids materializing a ~120 MiB array on the stack
    unsafe { std::ptr::write_bytes(lmr.as_mut_ptr() as *mut i32, 0, LEN) };
    // write the content of the local mr into the remote mr with an immediate value
    rdma.write_with_imm(&lmr, &mut rmr, 1).await?;
    // then send rmr's metadata to server to make server aware of it
    rdma.send_remote_mr(rmr).await?;
    Ok(())
}
#[tokio::main]
async fn main() {
    let addr = SocketAddrV4::new(Ipv4Addr::new(192, 168, 59, 131), 8081);
    client(addr)
        .await
        .map_err(|err| println!("{}", err))
        .unwrap();
    tokio::time::sleep(Duration::from_secs(3)).await;
}
GTwhy (Collaborator) commented May 13, 2022:

@Lion-tang
Hi, I will try it, and you are welcome to update if you find anything else.

GTwhy (Collaborator) commented May 20, 2022:

Hey @Lion-tang , I've tried your demo in different VMs and everything seems to be working well.
I even changed LEN to 300 * 1024 * 1024 and didn't have any problems.
The only thing I changed is the IP address. Maybe you can change the IP in your server demo to 0.0.0.0, like:

    let addr = SocketAddrV4::new(Ipv4Addr::new(0, 0, 0, 0), 8081);

Lion-tang (Author) commented:

Thanks, I will test the case you mentioned.
