add web page
zhoumu53 committed Sep 24, 2023
1 parent c4b59dc commit 6234d6b
Showing 5 changed files with 240 additions and 0 deletions.
docs/root/index.html (240 additions, 0 deletions)
@@ -0,0 +1,240 @@
<!doctype html>
<html lang="en">

<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

<!-- Primary Meta Tags -->
<title>Rethinking pose estimation in crowds: overcoming the detection information bottleneck and ambiguity</title>
<meta name="title" content="Rethinking pose estimation in crowds: overcoming the detection information bottleneck and ambiguity">
<meta name="description" content="Frequent interactions between individuals are a fundamental challenge for pose estimation algorithms. We propose BUCTD (bottom-up conditioned top-down pose estimation), a pipeline that uses a bottom-up model as the detector, which in addition to an estimated bounding box provides a pose proposal that is fed as a condition to an attention-based top-down model. BUCTD achieves 78.5 AP on CrowdPose and 47.2 AP on OCHuman, and strongly improves performance on multi-animal benchmarks involving mice, fish and monkeys.">

<!-- Open Graph / Facebook -->
<meta property="og:type" content="website">
<meta property="og:url" content="https://cebra.ai/">
<meta property="og:title" content="Learnable latent embeddings for joint behavioural and neural analysis">
<meta property="og:description" content="Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large neural and behavioural data increases, there is growing interest in modeling neural dynamics during adaptive behaviors to probe neural representations. In particular, neural latent embeddings can reveal underlying correlates of behavior, yet, we lack non-linear techniques that can explicitly and flexibly leverage joint behavior and neural data. Here, we fill this gap with a novel method, CEBRA, that jointly uses behavioural and neural data in a hypothesis- or discovery-driven manner to produce consistent, high-performance latent spaces. We validate its accuracy and demonstrate our tool's utility for both calcium and electrophysiology datasets, across sensory and motor tasks, and in simple or complex behaviors across species. It allows for single and multi-session datasets to be leveraged for hypothesis testing or can be used label-free. Lastly, we show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, and rapid, high-accuracy decoding of natural movies from visual cortex.">
<meta property="og:image" content="">

<!-- Twitter -->
<meta property="twitter:card" content="summary_large_image">
<meta property="twitter:url" content="https://cebra.ai/">
<meta property="twitter:title" content="Learnable latent embeddings for joint behavioural and neural analysis">
<meta property="twitter:description" content="Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large neural and behavioural data increases, there is growing interest in modeling neural dynamics during adaptive behaviors to probe neural representations. In particular, neural latent embeddings can reveal underlying correlates of behavior, yet, we lack non-linear techniques that can explicitly and flexibly leverage joint behavior and neural data. Here, we fill this gap with a novel method, CEBRA, that jointly uses behavioural and neural data in a hypothesis- or discovery-driven manner to produce consistent, high-performance latent spaces. We validate its accuracy and demonstrate our tool's utility for both calcium and electrophysiology datasets, across sensory and motor tasks, and in simple or complex behaviors across species. It allows for single and multi-session datasets to be leveraged for hypothesis testing or can be used label-free. Lastly, we show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, and rapid, high-accuracy decoding of natural movies from visual cortex.">
<meta property="twitter:image" content="">

<!-- Bootstrap CSS -->
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-1BmE4kWBq78iYhFldvKuhfTAU6auU8tT94WrHftjDbrCEXSU1oBoqyl2QvZ6jIW3" crossorigin="anonymous">

<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>

<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Sans+Condensed&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono&display=swap" rel="stylesheet">

<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.13.1/css/all.min.css" rel="stylesheet">

<style>

:root {
--cebra-c: #1D29B8;
--cebra-e: #6235E0;
--cebra-b: #A045E8;
--cebra-r: #BF1BB9;
--cebra-a: #D4164F;
}

.main {
font-family: 'IBM Plex Sans Condensed', sans-serif;
color: gainsboro;
}

.container-fluid .col {
width: 100%;
padding-left: 0;
padding-right: 0;
}

.code {
font-family: 'IBM Plex Mono', monospace;
}

h3 {
color: var(--cebra-r);
}

a {
color: var(--cebra-r);
font-family: 'IBM Plex Sans Condensed', sans-serif;
}

a:hover {
color: var(--cebra-b);
}

.muted-link {
color: #BF1BB9;
}

.paper-thumbnail {
background-color: white;
border-radius: 5%;
}
</style>

</head>

<body style="background-color: rgb(0, 0, 0);">
<div class="container-fluid d-flex flex-column main">
<div class="row">
<div class="col-md-2">
</div>
<div class="col-md-8" id="main-content">
<div class="row text-center my-5" id="#">
<h1>Rethinking pose estimation in crowds: overcoming the detection information bottleneck and ambiguity</h1>
</div>

<!-- Begin author list-->
<div class="row text-center mb-4">
<div class="col-md-3 mb-4"></div>
<div class="col-md-2 mb-4">
Mu Zhou*<br />
EPFL
</div>
<div class="col-md-2 mb-4">
Lucas Stoffl*<br />
EPFL
</div>
<div class="col-md-2 mb-4">
Mackenzie W. Mathis<br />
EPFL
<a href="https://www.mackenziemathislab.org/mackenziemathis" target="_blank"><i class="fas fa-link"></i></a>
</div>
<div class="col-md-2 mb-4">
Alexander Mathis<br />
EPFL
<a href="https://www.mathislab.org/" target="_blank"><i class="fas fa-link"></i></a>
</div>
</div>
<!-- End author list-->

<div class="row text-center">
<div class="col-md-2 mb-4"></div>
<div class="col-sm-2 mb-2"></div>
<div class="col-sm-2 mb-2">
<h4>
<a href="https://github.com/amathislab/BUCTD" target="_blank"> <i class="fab fa-github"></i>
Code
</a>
</h4>
</div>
<div class="col-sm-2 mb-2">
<h4>
<a href="https://arxiv.org/abs/2306.07879" target="_blank">
<i class="fas fa-file-alt"></i>
Paper
</a>
</h4>
</div>
</div>

<div class="row pt-4">
<h3>
<i class="fas fa-file"></i>
Abstract
</h3>
</div>

<div class="row">

</div>
<div class="row">
<p>
Frequent interactions between individuals are a
fundamental challenge for pose estimation algorithms.
Current pipelines either use an object detector together
with a pose estimator (top-down approach), or localize
all body parts first and then link them to predict the
pose of individuals (bottom-up). Yet, when individuals
closely interact, top-down methods are ill-defined due
to overlapping individuals, and bottom-up methods often
falsely infer connections to distant body parts. Thus,
we propose a novel pipeline called bottom-up conditioned
top-down pose estimation (BUCTD) that combines the
strengths of bottom-up and top-down methods. Specifically,
we propose to use a bottom-up model as the detector,
which in addition to an estimated bounding box provides a
pose proposal that is fed as condition to an attention-based
top-down model. We demonstrate the performance and efficiency
of our approach on animal and human pose estimation benchmarks.
On CrowdPose and OCHuman, we outperform previous state-of-the-art
models by a significant margin. We achieve 78.5 AP on CrowdPose
and 47.2 AP on OCHuman, an improvement of 8.6% and 4.9% over
the prior art, respectively. Furthermore, we show that our
method has excellent performance on non-crowded datasets
such as COCO, and strongly improves the performance on multi-animal
benchmarks involving mice, fish and monkeys.
</p>
</div>
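<!-- Illustrative sketch (not part of the released code): the abstract above describes
     BUCTD's two-stage pipeline, in which a bottom-up model provides a bounding box plus
     a pose proposal, and that proposal conditions an attention-based top-down model.
     Hypothetical pseudocode, with placeholder names rather than the repository's API:

       proposals = bottom_up_model(image)                # boxes + coarse pose proposals
       poses = []
       for box, pose_proposal in proposals:
           crop = crop_to_box(image, box)                # standard top-down cropping
           poses.append(top_down_model(crop, condition=pose_proposal))
-->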
<div class="row">

<div class="col-md-4 mb-3">
<video width="100%" autoplay loop muted preload="auto">
<source src="../source/video/buctd-iccv.mp4" type="video/mp4">
</video>
</div>

</div>

<div class="row">

<div class="col-md-4 mb-3">
<img src="../source/gif/buctd-1.gif" alt="GIF">
</div>

<div class="col-md-4 mb-3">
<img src="../source/gif/buctd-2.gif" alt="GIF">
</div>

<div class="col-md-4 mb-3">
<img src="../source/gif/buctd-3.gif" alt="GIF">
</div>

</div>


<div class="row pt-4">
<h3>
<i class="fas fa-graduation-cap"></i>
BibTeX</h3>
</div>
<div class="row">
<p>Please cite our paper as follows:</p>
</div>
<div class="row justify-content-md-center">
<div class="col-sm-10 rounded p-3 m-2" style="background-color: rgb(20,20,20);">
<small class="code">
@inproceedings{zhou2023iccv,<br/>
&nbsp;&nbsp;title={Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity},<br/>
&nbsp;&nbsp;author={Mu Zhou and Lucas Stoffl and Mackenzie W. Mathis and Alexander Mathis},<br/>
&nbsp;&nbsp;year={2023},<br/>
&nbsp;&nbsp;booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}<br/>
}
</small>
</div>
</div>

<div class="row">
<small class="text-muted">Webpage designed using Bootstrap 5 and Fontawesome 5.</small>
<a href="#" class="ml-auto"><i class="fas fa-sort-up"></i></a>
</div>

</div>
</div>

</div>
</body>

</html>
Binary file added docs/source/gif/.DS_Store
Binary file not shown.
Binary file added docs/source/gif/buctd-1.gif
Binary file added docs/source/gif/buctd-2.gif
Binary file added docs/source/gif/buctd-3.gif
