Open Source AI Hackathon 2024

GenAI makers and creators contest and showcase


Sasank Chilamkurthy

@chsasank

Open-Source GPU Stacks in the Era of Proprietary Dominance

Submitted Jan 11, 2024


In the rapidly evolving landscape of computing, Graphics Processing Units (GPUs) have expanded far beyond traditional graphics rendering to become pivotal in advanced computational tasks, especially AI and machine learning. NVIDIA’s CUDA platform has been at the forefront of this shift, offering powerful tools for GPU programming. However, CUDA’s proprietary nature limits open-source development and cross-platform compatibility.

This situation underscores the need for an open-source alternative that democratizes GPU computing, fostering innovation and accessibility. Embracing open-source GPU stacks aligns with the ethos of collaborative development and breaks down the barriers imposed by proprietary ecosystems, giving a broader community of developers the opportunity to harness GPUs. Such diverse and flexible computing platforms are essential for driving advances in AI.

By focusing on open-source technologies like Intel’s oneAPI and DPC++, we aim to explore and expand the horizons of GPU programming, moving towards a more inclusive and innovative future in high-performance computing.


Technology Stack Overview

  1. Intel’s oneAPI and DPC++: A unified programming model and a SYCL-based extension of C++ designed for heterogeneous computing across CPUs, GPUs, and FPGAs, offering a viable open-source alternative to CUDA (a minimal kernel sketch follows this list).

  2. LLaMA (Large Language Model Meta AI): Meta AI’s family of large language models, showcasing the need for robust, scalable, and efficient computing platforms to support complex AI tasks.

  3. SPIR-V and MLIR (Multi-Level Intermediate Representation): Intermediate representations that bridge the gap between high-level programs and hardware-specific code, crucial for optimizing performance across different architectures.

  4. Scheme Programming Language and C-FFI: Representing simplicity and flexibility in programming, Scheme and its C foreign-function interface can be pivotal in creating more accessible front ends for complex systems like MLIR.
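
To make the DPC++ item above concrete, here is a minimal sketch of a SYCL-style kernel as oneAPI expresses it: a vector addition offloaded to whichever device the default selector picks. The file name, sizes, and compile command are illustrative assumptions, not part of the proposal.

    // Minimal DPC++/SYCL sketch (hypothetical file vadd.cpp): vector addition
    // offloaded to whichever device the default selector picks (GPU, CPU, ...).
    // Typically compiled with Intel's oneAPI compiler: icpx -fsycl vadd.cpp
    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        constexpr size_t n = 1024;
        sycl::queue q;  // default device selection

        // Unified shared memory, visible to both host and device.
        float *a = sycl::malloc_shared<float>(n, q);
        float *b = sycl::malloc_shared<float>(n, q);
        float *c = sycl::malloc_shared<float>(n, q);
        for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // One work-item per element, roughly analogous to a CUDA thread.
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            c[i] = a[i] + b[i];
        }).wait();

        std::cout << "c[0] = " << c[0] << std::endl;  // expect 3
        sycl::free(a, q); sycl::free(b, q); sycl::free(c, q);
        return 0;
    }

The same kernel source can run on Intel GPUs, CPUs, and any other backend a SYCL runtime exposes, which is the portability argument behind this stack.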


Hackathon Ideas

  1. Adapting LLaMA with DPC++ for Intel Architectures: This project aims to port LLaMA models to run on Intel’s diverse architectures using DPC++, improving their performance and scalability; a sketch of the kind of kernel involved follows this list.

  2. Mini-CUDA: A Language for Xe-MLIR Code Emission: Develop a new programming language tailored to Intel architectures that lets developers emit Xe-MLIR code directly, much as CUDA does for NVIDIA GPUs.

  3. MLIR-Scheme Interface via C-FFI: Create an interface between the Scheme programming language and MLIR through C-FFI, enabling direct generation of MLIR code from Scheme and simplifying the process for developers (see the C API sketch after this list).
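
For idea 1, LLaMA inference is dominated by matrix-vector products, so a port largely comes down to expressing those in DPC++. Below is a hedged sketch of such a kernel; the function name, the row-major layout, and the assumption that the pointers are USM allocations are illustrative choices, not taken from any existing port.

    // Hypothetical building block for a DPC++ LLaMA port: y = W * x with one
    // work-item per output row. W is rows x cols, row-major; W, x, y are
    // assumed to be USM pointers (e.g. from sycl::malloc_shared or malloc_device).
    #include <sycl/sycl.hpp>

    void gemv(sycl::queue &q, const float *W, const float *x, float *y,
              size_t rows, size_t cols) {
        q.parallel_for(sycl::range<1>(rows), [=](sycl::id<1> idx) {
            const size_t r = idx[0];  // output row handled by this work-item
            float acc = 0.0f;
            for (size_t c = 0; c < cols; ++c)
                acc += W[r * cols + c] * x[c];
            y[r] = acc;
        }).wait();
    }

A real port would add attention and normalization kernels, weight quantization, and device-aware tiling; this is only the smallest representative piece.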
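
For idea 3, MLIR already ships a C API (the mlir-c headers) precisely so that languages with a C-FFI, such as Scheme, can drive it without touching C++. The sketch below shows the kind of flat entry points such a binding would wrap; build flags and the exact set of CAPI libraries to link against depend on the LLVM/MLIR installation and are assumptions here.

    // The kind of C entry points a Scheme C-FFI binding would wrap: create an
    // MLIR context, build an empty module, and dump it. Compiles as C or C++,
    // using MLIR's C API from the mlir-c headers.
    #include "mlir-c/IR.h"

    int main(void) {
        MlirContext ctx = mlirContextCreate();
        MlirLocation loc = mlirLocationUnknownGet(ctx);

        // The empty module a Scheme front end would populate with operations.
        MlirModule module = mlirModuleCreateEmpty(loc);
        mlirOperationDump(mlirModuleGetOperation(module));  // prints an empty module

        mlirModuleDestroy(module);
        mlirContextDestroy(ctx);
        return 0;
    }

From Scheme, each of these functions becomes a single foreign-function declaration, which is what keeps the proposed interface small.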


This hackathon serves not only as a platform for technical exploration but also as a catalyst for community-driven innovation in open-source GPU computing. By leveraging and enhancing these technologies, we aim to forge a path towards a more open, collaborative future in high-performance computing.

