Model Test Failed #1

Open · ljchang opened this issue Jun 4, 2016 · 20 comments

ljchang (Member) commented Jun 4, 2016

`'Brain_Data' object has no attribute 'data'`

burnash added the bug label Jun 6, 2016

burnash (Contributor) commented Jun 6, 2016

It's a bug indeed. This is related to an nltools issue: cosanlab/nltools#43

ljchang (Member, Author) commented Jun 6, 2016

Fixed the bug in nltools.

burnash (Contributor) commented Jun 7, 2016

Thanks! I've upgraded nltools and now it works.

burnash closed this as completed Jun 7, 2016

ljchang (Member, Author) commented Jun 7, 2016

Hi Anton, I tested this on both the dev and production servers, and for some reason the tests don't seem to be working on production: the r values are 0 there, while they look correct on the dev server. Did I test it too soon?

ljchang reopened this Jun 7, 2016

burnash (Contributor) commented Jun 7, 2016

No, this is not OK and you didn't test too soon.

The correlation values for your test in production are near zero (and are rounded to 0 in the display), whereas the same test on dev gives clearly non-zero correlations:

| id   | name                  | r (production)          | r (dev)             |
|------|-----------------------|-------------------------|---------------------|
| 3393 | pain ALE              | 0.002890887232158625    | 0.21919474646246498 |
| 3391 | cognitive control ALE | -0.0011885438298766988  | 0.1107935112067798  |
| 3392 | negative affect ALE   | 0.001298994551741561    | 0.12110203389266345 |

I'll track down the source of this issue.

ljchang (Member, Author) commented Jun 8, 2016

Not sure what's going on; it's pretty inconsistent. When I test it on some images it seems to work, and on others the result is zero. Let me know if you think it might have something to do with the similarity function, and I can look into it more.
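
For reference, here is a minimal sketch of the kind of similarity test being discussed, assuming nltools' Brain_Data.similarity API; the file paths and variable names are placeholders rather than the actual test harness:

```python
# Hedged sketch of the model test: spatially correlate a trained pattern
# with a meta-analysis image. Both paths below are placeholders.
from nltools.data import Brain_Data

model = Brain_Data('pain_ridge_model.nii.gz')  # trained pattern (placeholder)
image = Brain_Data('pain_ALE.nii.gz')          # test image (placeholder)

r = image.similarity(model, method='correlation')  # a near-zero r is the symptom here
print(r)
```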

ljchang (Member, Author) commented Jun 11, 2016

@burnash Have you noticed this happening on any of your tests? It seems like it is only happening for me on the Shackman meta-analysis images at the moment.

burnash (Contributor) commented Jun 11, 2016

No, I haven't noticed it yet, but I'm now trying to reproduce it locally with your input data.

burnash (Contributor) commented Jun 12, 2016

@ljchang Very strange results so far. I can't reproduce the zero correlation on my laptop: it gives me a non-zero correlation for the pain ridge model tested on the Shackman meta-analysis, while I get zero for the same data in both production and dev (with the same nltools version in every environment).

ljchang (Member, Author) commented Jun 13, 2016

@burnash That is very strange. Also, I looked at older tests using that same dataset, and it looked like it was working properly. It's the only dataset that seems to be exhibiting this behavior at the moment, and it's also the one I use most frequently to test the models.

I wonder if the data got corrupted in the server-side cache. Is it possible to empty all of the NeuroVault data and see if it works once we redownload it? We could try it on the dev server first. Not sure if you've already tried this.

burnash (Contributor) commented Jun 13, 2016

@ljchang That's right. I already emptied the cache, but with no result. I also noticed that prior tests (more than a month ago) didn't have zero correlations.

ljchang (Member, Author) commented Jun 13, 2016

@burnash: OK, I think I figured it out. It looks like those particular images were updated on 3/27/15 by the authors, and for some strange reason they no longer load properly with nltools. That explains why it used to work but doesn't anymore, and also why it is selective to this particular dataset: http://neurovault.org/collections/474/

Now on to the next problem: I have no idea why they aren't loading correctly. They work fine with nibabel.load(), but not with Brain_Data():

```python
dat = Brain_Data(glob.glob(os.path.join(base_dir, '*updated_*.nii.gz')))
dat.plot()
```

[screenshot: output of dat.plot() for the updated images]
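
As a quick way to narrow this down, here is a hedged diagnostic sketch (not the actual debugging session): if nibabel reads the updated files cleanly, the breakage is downstream of file I/O, e.g. in the masking/resampling step that builds Brain_Data's internal matrix. base_dir is a placeholder for the directory holding the downloaded collection.

```python
# Inspect what nibabel itself reports for each updated image: shape and
# voxel sizes. Odd dimensions or zooms here would point at a header change.
import glob
import os

import nibabel as nib

base_dir = '.'  # placeholder: directory containing the downloaded collection

for f in glob.glob(os.path.join(base_dir, '*updated_*.nii.gz')):
    img = nib.load(f)
    print(f, img.shape, img.header.get_zooms())
```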

burnash (Contributor) commented Jun 13, 2016

@ljchang That's a really great find! I did notice the word "updated" when I ran the test locally, but I paid no attention to it. One strange thing remains, though: it still works fine on my laptop while failing on the remote servers. I guess it could be related to nltools' dependencies (those without a strict version requirement). I'm going to check whether I have an older nibabel locally.

ljchang (Member, Author) commented Jun 13, 2016

Let me know what you find with the dependencies. I have a feeling it is going to be related to nilearn, since that is the package that transforms the data from nibabel into Brain_Data()'s representation.

Here is the version on my laptop: nilearn==0.2.5

This will be useful, as it may help me figure out why it isn't loading correctly on my laptop if it does turn out to be related to the version.
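
For context, here is an illustrative sketch of the transform step in question, assuming (as the comment suggests) that nilearn performs the nibabel-to-matrix conversion; the NiftiMasker usage and file paths are assumptions for illustration, not nltools' actual code:

```python
# Illustrative sketch: nilearn's NiftiMasker is the standard way a NIfTI
# image becomes the 2D (images x voxels) array that a Brain_Data-style
# object holds. Mask and image paths are placeholders.
from nilearn.input_data import NiftiMasker  # module path in nilearn 0.2.x

masker = NiftiMasker(mask_img='brain_mask.nii.gz')         # placeholder mask
data_2d = masker.fit_transform('updated_pain_ALE.nii.gz')  # placeholder image
print(data_2d.shape)  # (n_images, n_voxels)
```

A version-dependent difference in how this step handles resampling or header quirks could explain environment-specific results for the same files.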

burnash (Contributor) commented Jun 13, 2016

It's nilearn==0.1.3 on my laptop, and nilearn==0.2.3 on dev and production.
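
One quick way to compare environments like this is to print the installed versions in each one; a hedged sketch, assuming each package exposes __version__ (all three do by convention):

```python
# Print the versions of the packages suspected in this thread so the
# laptop, dev, and production environments can be diffed directly.
import nibabel
import nilearn
import nltools

for pkg in (nilearn, nibabel, nltools):
    print(pkg.__name__, pkg.__version__)
```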

ljchang (Member, Author) commented Jun 13, 2016

OK, it looks like that is probably the reason for the conflicting results. Now we just need to figure out why it's not working on this dataset with the new version.

ljchang (Member, Author) commented Jun 15, 2016

Should we go ahead with inviting the beta testers? I probably won't have time to figure out this problem for at least 2 weeks.

burnash (Contributor) commented Jun 15, 2016

If the nilearn issue is not blocking us, then I think so. The other interface improvement in the pipeline is the selection of values to classify. I think this is also not a blocker, because I expect other, more important issues to be raised during beta testing.

ljchang (Member, Author) commented Jun 15, 2016

Me too. I'll go ahead and send an email out today.

burnash (Contributor) commented Jun 15, 2016

Great, thank you!
