Pat Gunn edited this page Nov 28, 2017 · 5 revisions

Comparison

We want to keep improving CaImAn. To do this, we carefully store and compare the results of CaImAn against human-defined standards (manual labeling of neurons) and against previous iterations of CaImAn.

To this end, code will be tested before it lands in the codebase.

Travis and tests

Travis is a continuous integration framework used to test the code with predefined tests. One such test is the comparison function. Its intent is to give developers an easy look at the effects of their proposed changes, so they can detect improvements or regressions relative to the upstream version of CaImAn.

Developers can also run the tests themselves using the nose package (the nosetests command).
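As an illustration, a nose-discoverable comparison test might look like the sketch below. The function names, data, and tolerance here are made up for illustration; CaImAn's actual tests live in its own tests folder and load saved results instead of inline lists.

```python
# Sketch of a nose-style comparison test (illustrative only -- the names
# and tolerance are assumptions, not CaImAn's actual test code).
import math

def compare_to_ground_truth(result, ground_truth, rel_tol=1e-5):
    """True when every value matches the stored ground truth within tolerance."""
    return all(math.isclose(r, g, rel_tol=rel_tol)
               for r, g in zip(result, ground_truth))

def test_comparison():
    # In the real pipeline, both sequences would be loaded from saved files.
    ground_truth = [0.1, 0.2, 0.3]
    result = [0.1, 0.2, 0.3]
    assert compare_to_ground_truth(result, ground_truth)
```

Any function whose name starts with `test_` is picked up automatically when nosetests runs.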

Important information

The ground truth is the dataset against which your output will be compared.

Outside developers should not modify the ground truth or the parameters of the testing function without coordination with central developers.

Outside developers who know their changes will introduce (helpful) differences in output should run nosetests and find their comparison results inside the tests/comparison/tests folder. Then rename the newly created X(number) folder to "tosend" so that it is sent to the core developers as part of the pull request.
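The renaming step above can be scripted. The tests/comparison/tests path and the numbered-folder convention come from the text; the run number itself is a placeholder you would replace with your own.

```python
# Sketch: rename the numbered results folder produced by nosetests to
# "tosend" before opening a pull request. The base path follows the wiki
# text; the run number is a placeholder for your X(number) folder.
import os

def mark_for_sending(run_number, base="tests/comparison/tests"):
    """Rename the X(number) results folder to 'tosend'."""
    os.rename(os.path.join(base, str(run_number)),
              os.path.join(base, "tosend"))
```

For example, `mark_for_sending(3)` renames `tests/comparison/tests/3` to `tests/comparison/tests/tosend`.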

More information

More information is available in the documentation of the code:

  • A mind map of how the data is stored is included inside the comparison folder.

  • The README explains how to compare your algorithm.

  • Look at the functions in the tests and comparison folders.

Investigative comparison

There is an "investigative comparison" notebook designed to help compare differing results using the bokeh library.

You will have to re-run the comparison pipeline using the notebook. For example, you can generate a ground-truth file from a previous version of CaImAn that you know well, compare it to a new version of CaImAn, and investigate the differences in the results.
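Outside the notebook, a quick first look at the differences between two saved runs can be done directly. The file names and the `"C"` key below are hypothetical; CaImAn's comparison pipeline stores its own structures (see the mind map in the comparison folder).

```python
# Sketch: report the largest discrepancy between two saved result files.
# File names and the "C" key are assumptions for illustration only.
import numpy as np

def max_difference(old_path, new_path, key="C"):
    """Return the largest absolute difference between two saved traces."""
    old = np.load(old_path, allow_pickle=True)[key]
    new = np.load(new_path, allow_pickle=True)[key]
    return float(np.abs(old - new).max())
```

A result of exactly 0.0 means the two versions produced identical traces for that key; anything larger is a starting point for investigation in the notebook.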

Create your ground truth

You can generate your own ground truth using the provided create_gt.py script:

  1. Select your desired parameters inside the params dictionary of the script.
  2. Keep the old ground truth somewhere safe.
  3. Run the script; you now have your new ground truth.
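Step 2 above is worth automating so the old ground truth is never lost. The sketch below assumes the ground truth lives in a single file; the file name `groundtruth.npz` is an assumption, so check the comparison folder for the actual one.

```python
# Sketch of step 2: back up the old ground truth before create_gt.py
# overwrites it. The file name "groundtruth.npz" is an assumption.
import shutil

def backup_ground_truth(path="groundtruth.npz"):
    """Copy the current ground-truth file to a .bak sibling."""
    backup_path = path + ".bak"
    shutil.copyfile(path, backup_path)
    return backup_path
```

After the backup, running create_gt.py (step 3) writes the new ground truth while the .bak copy preserves the old one for later comparison.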