# llaminate

Optimized version of llama3, using tokun.

## Neural tokenization

This project is a showcase for a neural tokenization technique. Since the inputs are compressed into a smaller shape, the LLM can be downsized accordingly.
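To give a feel for the compression, here is a toy sketch of how a byte-level sequence shrinks when fixed groups of characters are folded into single dense embeddings. The serialization to UTF-32-BE and the group size of 16 codepoints are assumptions chosen for illustration, not tokun's confirmed settings:

```python
# Toy illustration of the input compression, not tokun's actual implementation.
# Assumption: text is serialized to UTF-32-BE (4 bytes per codepoint) and fixed
# groups of codepoints are folded into single dense "neural token" embeddings.

CHARS_PER_TOKEN = 16  # hypothetical group size, for illustration only

def sequence_lengths(text: str) -> tuple[int, int]:
    raw = len(text.encode('utf-32-be'))          # byte-level sequence length
    tokens = -(-len(text) // CHARS_PER_TOKEN)    # ceil: compressed length
    return raw, tokens

raw, tokens = sequence_lengths('hello world, this is a compressed input!')
print(f'{raw} bytes -> {tokens} neural tokens')  # 160 bytes -> 3 neural tokens
```

A shorter sequence axis means every attention and feed-forward layer does proportionally less work, which is what allows the model itself to shrink.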

For example, llama3-8b is brought down to 34 million parameters instead of 8 billion.
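As a sanity check on that order of magnitude, the back-of-the-envelope sketch below counts transformer parameters under simplifying assumptions. The dimensions are hypothetical placeholders chosen to land near the quoted figures, not the actual llama3 or llaminate configurations:

```python
# Back-of-the-envelope transformer parameter count; ignores norms, biases and
# attention variants like GQA. All dimensions below are illustrative guesses,
# NOT the actual llama3 or llaminate configurations.

def transformer_params(embed_dim: int, num_layers: int, vocab_size: int, ffn_mult: int = 4) -> int:
    embedding = vocab_size * embed_dim              # token embedding table
    attention = 4 * embed_dim * embed_dim           # Q, K, V, O projections
    ffn = 2 * ffn_mult * embed_dim * embed_dim      # up + down projections
    return embedding + num_layers * (attention + ffn)

# Subword-vocabulary baseline: huge embedding table, wide and deep stack.
baseline = transformer_params(embed_dim=4096, num_layers=32, vocab_size=128_000)

# Compressed inputs: no vocabulary-sized embedding table, narrower and shallower.
downsized = transformer_params(embed_dim=512, num_layers=11, vocab_size=0)

print(f'baseline  ~ {baseline / 1e9:.1f}B parameters')   # ~ 7.0B
print(f'downsized ~ {downsized / 1e6:.1f}M parameters')  # ~ 34.6M
```

Most of the savings come from dropping the vocabulary-sized embedding table and shrinking the embedding dimension, which every layer's weight matrices scale with quadratically.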

## Installation
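Assuming the repository follows a standard Python package layout (an assumption, not confirmed here), it can typically be installed straight from GitHub with pip:

```sh
# Assumes a standard Python package layout; adjust if the project differs.
pip install git+https://github.com/apehex/llaminate.git
```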

## Usage

## Resources

### Models

### Notebooks

Final model:

## TODO

See TODO.

## Credits

This project winks at llama3 from Meta, but doesn't actually use its weights or code.

## License

Licensed under the AGPLv3.
