Decoding Llama3: An explainer for tinkerers
A not-so-quick 7-part guide to using the Llama3 open-source AI model
Read all the chapters of the 7-part Decoding Llama3 explainer by Simrat Hanspal below...
Decoding Llama3: Part 1 - Intro to Llama3
On 18th April 2024, Meta introduced pre-trained and instruction-fine-tuned language models with 8B and 70B parameters for a broad range of use cases, including improved reasoning.

Decoding Llama3: Part 2 - Understanding the configuration
An overview of the model architecture and its configuration.

Decoding Llama3: Part 3 - Normalisation
This chapter covers normalisation in Llama3.

Decoding Llama3: Part 4 - Rotary Positional Embeddings
This chapter covers Rotary Positional Embeddings.

Decoding Llama3: Part 5 - Grouped Query Attention
This chapter covers Grouped Query Attention.

Decoding Llama3: Part 6 - Feed Forward Network
This chapter covers the Feed Forward Network and the SiLU activation function.

Decoding Llama3: Part 7 - Transformer Block & Module
This chapter covers the Transformer block and the Transformer module.