KLB file in pipeline? #30
Comments
Hi Xqua, one way to approach this is to write everything to .tif. You can then use the .tif files with the workflow without modifying it. Another approach would be to write everything to .hdf5. Cheers,
Hi @schmiedc, sure, that would be an option... but I feel this is probably not the best one, though. It would require a lot more HDD space than I have lying around. I'll give it a try and pop back here if I need any help! EDIT:
Hi Xqua, it is true that the .tif option creates more data and should be avoided if possible. However, I am not sure your proposed solutions can work at all. There is also another, albeit smaller, catch: my workflow depends on an old Fiji version. So are you really sure that the Fiji version I provided is able to load and resave your data? Cheers,
Hm, I see your point about versions! The problem I can see, indeed, is that the Fiji version for the snakemake pipeline is from 2015, which did not yet have support for this. But this might not be problematic if the workflow runs on the newer version? I haven't tried, though... The easiest solution would be this one:
That way there are minimal changes and interference with the rest of the code, and it starts from a similar point: one KLB file per timepoint, easily splittable for multiprocessing. Would you mind if such a script ran in Python, called by a bsh script, instead of ImageJ?
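A minimal sketch of such a per-timepoint conversion script, assuming the third-party `pyklb` and `tifffile` packages (neither is part of the workflow described here) and an illustrative filename pattern that is not taken from the actual pipeline:

```python
# Hypothetical per-timepoint KLB -> TIFF converter so the unmodified
# workflow can consume plain .tif files. pyklb and tifffile are assumed
# third-party packages; the filename pattern below is only an example.
import os

def tiff_name_for(klb_path):
    """Map e.g. 'SPM00_TM000012_CM00.klb' -> 'SPM00_TM000012_CM00.tif'."""
    stem, _ = os.path.splitext(os.path.basename(klb_path))
    return stem + ".tif"

def convert(klb_path, out_dir):
    """Read one KLB volume and resave it as a multi-page TIFF."""
    import pyklb, tifffile             # assumed to be installed
    vol = pyklb.readfull(klb_path)     # numpy array, z/y/x order
    out = os.path.join(out_dir, tiff_name_for(klb_path))
    tifffile.imwrite(out, vol)
    return out
```

Because each timepoint is an independent file, calls to `convert` can be farmed out per timepoint from the bsh wrapper or a multiprocessing pool without touching the rest of the workflow.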
Hi Xqua, sorry for my belated reply. The issue is how the workflow interacts with Fiji. For these reasons it is very unlikely that a new Fiji version will work seamlessly with the workflow. This is compounded by the fact that one needs to execute these steps, write them into a macro, compare the resulting string to the string the workflow uses, and then change the string assembly in the BeanShell. I am currently undergoing a job change, so I will not have time in the next couple of months. Specifically, the proposed changes would only work if the following workflow steps also accept the KLB files. So one would need to add this functionality and make sure that all the steps in the workflow work with a recent Fiji version. I see the need for updating the workflow, but I cannot promise that I will have time soon for a major update. Cheers,
There is an updated Fiji version available for the workflow, tested and implemented by a group in Ostrava: #33. This Fiji version implements the following new format: It should then be relatively easy to adjust the define_czi.bsh script. Cheers,
Hi @schmiedc, I'll look into it as I have new datasets coming along. But TBH I've moved on to using a Google Cloud VM with Fiji and the GUI to run all of my datasets... it was just more time-efficient than fighting against cluster madness!
Hi,
Any trick or help towards making this available for KLB files ?
That would be of great help !
Thanks !