Language Models are Few-Shot Learners
Language Models are Few-Shot Learners is an important paper in the space of Generative AI and Natural Language Processing. It introduced GPT-3 and showed the capability of large language models to generalize as task-agnostic, few-shot learners.
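As a rough sketch of what "few-shot" means in the paper's setting: the model is conditioned on a handful of in-context demonstrations and asked to complete the next item, with no gradient updates. The short Python snippet below only assembles such a prompt as plain text (the English-to-French pairs echo the paper's illustrative example); how the prompt is then sent to a model is left out, since that depends on the API used.

def build_few_shot_prompt(demonstrations, query):
    # Concatenate demonstration pairs followed by the unanswered query.
    # The model is expected to continue the text with the answer.
    lines = ["Translate English to French:"]
    for english, french in demonstrations:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

demos = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
print(build_few_shot_prompt(demos, "peppermint"))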
RWKV: Reinventing RNNs for the Transformer Era
If you stepped into language modeling and Natural Language Processing (NLP) in the last three years, you are excused for being less familiar with Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks. Why? RNNs could not keep up with the unparalleled capabilities (pun intended) of Transformers and have since fallen out of favor as the go-to architecture in modern deep learning…
PWL MAY 2024: Med-PaLM M: A generalist biomedical AI system that flexibly encodes and integrates multimodal biomedical data
About the Paper: Medicine is inherently multimodal, and clinicians interpret data from a wide range of modalities when providing care, including clinical notes, laboratory tests, vital signs and observations, medical images, and genomics.
Dota 2 with Large Scale Deep Reinforcement Learning
On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges that will become increasingly central to more capable AI systems.