Very bad training results #1013

Open
yuyunlong2002 opened this issue Oct 14, 2024 · 3 comments

Comments

@yuyunlong2002

I got very poor results when running Gaussian Splatting, not only on my own dataset but also on the MipNeRF360 scenes and the tandt_db data provided with the project.
There is a significant gap between my training results and the pre-trained models provided by the project, and I don't know where the problem lies; I haven't changed any hyperparameters at all. The settings are the same as on the website:

        self.position_lr_init = 0.00016
        self.position_lr_final = 0.0000016
        self.position_lr_delay_mult = 0.01
        self.position_lr_max_steps = 30_000
        self.feature_lr = 0.0025
        self.opacity_lr = 0.05
        self.scaling_lr = 0.005
        self.rotation_lr = 0.001
        self.percent_dense = 0.01
        self.lambda_dssim = 0.2
        self.densification_interval = 100
        self.opacity_reset_interval = 3000
        self.densify_from_iter = 500
        self.densify_until_iter = 15_000
        self.densify_grad_threshold = 0.0002
        self.random_background = False
        super().__init__(parser, "Optimization Parameters")
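For context, the position learning rate in this config is not constant: it decays log-linearly from position_lr_init down to position_lr_final over position_lr_max_steps, with position_lr_delay_mult scaling it during an optional warm-up. A minimal sketch of such a schedule (illustrative only, not necessarily the repo's exact implementation):

```python
import numpy as np

def expon_lr(step, lr_init=0.00016, lr_final=0.0000016,
             delay_mult=0.01, delay_steps=0, max_steps=30_000):
    """Log-linear interpolation from lr_init to lr_final, with an optional
    sine-eased warm-up controlled by delay_mult/delay_steps.
    Illustrative sketch; the names mirror the OptimizationParams fields above."""
    if delay_steps > 0:
        # Ease in from roughly delay_mult * lr towards the full schedule.
        delay = delay_mult + (1 - delay_mult) * np.sin(
            0.5 * np.pi * np.clip(step / delay_steps, 0, 1))
    else:
        delay = 1.0
    t = np.clip(step / max_steps, 0, 1)
    return delay * np.exp((1 - t) * np.log(lr_init) + t * np.log(lr_final))

# With the defaults above, the position LR drops 100x over 30k iterations.
print(expon_lr(0), expon_lr(15_000), expon_lr(30_000))
```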

I don't know whether my hardware meets the requirements, but I feel the results shouldn't be this bad. My hardware is as follows:

GPU: NVIDIA RTX 2070 Super (8 GB)
RAM: 16 GB
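
Before digging further, it's worth confirming that PyTorch actually sees the GPU, which CUDA build it was compiled against, and how much VRAM is reported. This is a generic check with standard torch calls, nothing specific to this repo:

```python
import torch

# Basic environment sanity check for the training machine.
print("CUDA available:", torch.cuda.is_available())
print("PyTorch CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("VRAM (GB):", round(props.total_memory / 1024**3, 1))
    print("Compute capability:", f"{props.major}.{props.minor}")
```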

My reconstruction results compared with the pre-trained results:
My result:
[screenshot: my reconstruction]
Pre-trained:
[screenshot: pre-trained reconstruction]

From what I observed, the initial state of the splats already differs significantly, and as the splats densify, many large white blobs appear in my results.
I also found a large difference in file size between the point_cloud.ply from my iteration_30000 output and the one from the pre-trained iteration_30000:
[screenshot: file size comparison]

This is not limited to this one scene: all of the demo scenes and my own datasets (train, truck, playroom, ...) currently show the same problem. Can someone tell me where the issue lies?
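
A large size difference in point_cloud.ply usually just means a very different number of Gaussians survived densification, so comparing the raw counts is a quick diagnostic. A small sketch using the plyfile package (pip install plyfile; the two paths are placeholders for my output and the pre-trained one):

```python
from plyfile import PlyData

def count_gaussians(path):
    """Each exported Gaussian is stored as one 'vertex' row in the .ply,
    so the vertex count is a proxy for the number of splats."""
    return PlyData.read(path)["vertex"].count

# Placeholder paths to the two exports being compared.
mine = count_gaussians("output/mine/point_cloud/iteration_30000/point_cloud.ply")
pretrained = count_gaussians("pretrained/point_cloud/iteration_30000/point_cloud.ply")
print(f"mine: {mine:,} Gaussians vs pre-trained: {pretrained:,}")
```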

@o-o-cloud

I encountered the same problem during my experiment yesterday!
[screenshot: incorrect training result]

@yuyunlong2002
Author

@o-o-cloud Hello, I was lucky enough to solve this problem after posting it last night: I ran Gaussian Splatting again on Colab and the final result was good. I hope this helps you solve your problem.
The issue is most likely something on my machine's hardware/environment, but I am still unsure why the hardware affects the training results.
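
If it helps to pin down how different the two runs really are, one option is to render the same held-out camera from both trained models and compare PSNR. A minimal sketch with NumPy and Pillow; the two image paths are placeholders:

```python
import numpy as np
from PIL import Image

def psnr(img_a, img_b):
    """Peak signal-to-noise ratio between two same-sized RGB images."""
    a = np.asarray(Image.open(img_a), dtype=np.float32) / 255.0
    b = np.asarray(Image.open(img_b), dtype=np.float32) / 255.0
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)

# Placeholder paths: renders of the same camera from the two models.
print("PSNR (dB):", psnr("render_local.png", "render_colab.png"))
```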

@o-o-cloud

o-o-cloud commented Oct 15, 2024

@yuyunlong2002 Thank you for your kind help, but I think my hardware is sufficient: an RTX 40-series GPU.
I trained under Ubuntu and visualized under Windows.
