Reconfigurable AI Computing
Mar 23 (Afternoon) @ ASPLOS'26
Abstract: The rapid proliferation of AI workloads across devices with vastly different computational capabilities has created an ecosystem of irregular and dynamic runtime demands. No single dataflow or layout can optimally serve all workloads, motivating years of research into reconfigurable accelerators that can flexibly adapt their dataflow and layout to the workload’s needs. However, despite significant academic progress, no publicly available reconfigurable accelerator platform currently exists for the community to learn from or build upon.
This tutorial fills that gap by offering the first hands-on experience in learning, simulating, and deploying a reconfigurable accelerator, from hardware to compiler co-design. Participants will gain practical knowledge across three pillars:
- Pillar 1 – Hardware (FEATHER): We introduce FEATHER, a state-of-the-art reconfigurable accelerator architecture that enables low-cost switching between dataflows and layouts, efficiently supporting diverse workload patterns. The tutorial provides RTL simulation guidance and hands-on exercises to explore how FEATHER achieves this flexibility; a toy model after this list sketches why the preferred dataflow depends on workload shape.
- Pillar 2 – Accelerator Design Language (Allo): To enable scalable accelerator generation, we present Allo, an Accelerator Design Language that allows participants to generate FEATHER variants of different scales with only minor frontend program modifications. This facilitates rapid scaling and evolution of the accelerator for various device classes and feature extensions.
- Pillar 3 – Compiler Infrastructure (ACT): Distinct workloads demand specialized dataflow and layout mappings for optimal performance. We introduce ACT, an ecosystem that automatically generates software tools, such as compilers, for the Allo-generated FEATHER accelerator. The generated compiler explores the large space of dataflow and layout mappings to identify the most latency-efficient configuration for each workload.
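To make the dataflow half of this design space concrete, here is a minimal sketch of a first-order latency proxy for a GEMM on an R×C PE array under three classic stationarity choices. The cost model (spatial tile count times temporal iterations), the 16×16 array size, and the dataflow names are illustrative assumptions for intuition only, not FEATHER's microarchitecture or performance model; the hands-on sessions use RTL simulation for real numbers.

```python
from math import ceil

def tile_steps(M, N, K, dataflow, rows=16, cols=16):
    """First-order latency proxy for C[M,N] += A[M,K] @ B[K,N] on a
    rows x cols PE array: (number of spatial tiles) x (temporal iterations).
    Illustrative only -- ignores memory stalls, fill/drain, and real mapping
    details; NOT FEATHER's performance model."""
    if dataflow == "output-stationary":   # map M, N spatially; iterate over K
        return ceil(M / rows) * ceil(N / cols) * K
    if dataflow == "weight-stationary":   # map K, N spatially; iterate over M
        return ceil(K / rows) * ceil(N / cols) * M
    if dataflow == "input-stationary":    # map M, K spatially; iterate over N
        return ceil(M / rows) * ceil(K / cols) * N
    raise ValueError(dataflow)

def best_dataflow(M, N, K):
    flows = ("output-stationary", "weight-stationary", "input-stationary")
    return min(flows, key=lambda f: tile_steps(M, N, K, f))

# A small-batch (small-M) layer under-utilizes an output-stationary mapping,
# while a small-reduction (small-K) layer flips the preference.
print(best_dataflow(8, 1024, 1024))    # -> weight-stationary
print(best_dataflow(1024, 1024, 8))    # -> output-stationary
```

Even this crude proxy shows that layers of different shapes prefer different spatial mappings, which is exactly the reconfigurability FEATHER provides at low cost.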
By the end of the tutorial, participants will understand how to compile diverse workloads, generate scalable accelerators, simulate performance via deployable RTL models, and tune the system for real-world deployment scenarios, empowering the ASPLOS community to advance the next generation of reconfigurable AI accelerators.
RAIC Resources
ACT Ecosystem
Ecosystem that automatically generates software tools, such as compilers, for the Allo-generated FEATHER accelerator.
FEATHER
Reconfigurable accelerator architecture enabling low-cost switching between dataflows and layouts.
Allo
Python-embedded accelerator design language (ADL) and compiler for scalable accelerator generation.
List of Topics
Overview of the tutorial
Introduction to Reconfigurable AI Computing
- Flexible Dataflow Demand
- Flexible Layout Demand
- Various Deployment Scenarios
Jupyter Notebook Setup
(credentials will be distributed on paper at the session)
FEATHER – Reconfigurable AI Accelerator
- Introduction to the FEATHER microarchitecture (15 min)
- Introduction to the FEATHER ISA (5 min)
- Hands-on: RTL simulation for various layout choices
- Hands-on: RTL simulation for various dataflow choices
- Hands-on: analytic performance estimation and standalone layout search
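As a preview of this exercise, below is a minimal sketch of a layout search against a toy banked-buffer model: a strided read costs as many cycles as the most heavily loaded bank, so the layout (reduced here to an address stride) determines how fast a PE row can be fed. The bank count, strides, and cost model are illustrative assumptions, not FEATHER's actual buffer organization; the hands-on replaces them with FEATHER's analytic model.

```python
from collections import Counter

def fetch_cycles(num_elems, stride, num_banks=16):
    """Toy banked-buffer model: one element per bank per cycle, so a strided
    read finishes when the most heavily hit bank has been drained."""
    hits = Counter((i * stride) % num_banks for i in range(num_elems))
    return max(hits.values())

def layout_search(vector_len, candidate_strides, num_banks=16):
    """Standalone layout search: pick the layout (i.e., address stride) that
    feeds a PE row in the fewest cycles."""
    return min(candidate_strides,
               key=lambda s: fetch_cycles(vector_len, s, num_banks))

# Reading one column of a row-major 64-wide tile means stride 64: every
# element lands in the same bank, so 16 reads take 16 cycles.  An interleaved
# layout (stride 1, or any stride coprime with the bank count) is conflict-free.
print(fetch_cycles(16, stride=64))                        # -> 16
print(fetch_cycles(16, stride=1))                         # -> 1
print(layout_search(16, candidate_strides=[64, 1, 17]))   # -> 1
```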
Allo – Python-embedded Accelerator Design Language (ADL)
- Talk: Introduction to Allo (20 min)
- Hands-on: credential distribution and environment setup for JupyterHub
- Hands-on: automated FEATHER generation via the Allo ADL (a parameterization sketch follows this block)
- Hands-on: configuration tuning of Allo-FEATHER
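For orientation before the hands-on, the sketch below captures the idea behind ADL-based generation in plain Python: the accelerator's scale lives in a small frontend parameter set, and producing an edge-class or server-class variant is a one-line change. The class and field names are hypothetical and this is not Allo syntax; the exercise expresses the same idea with Allo's Python-embedded ADL and lowers it to RTL.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatherVariantSpec:
    """Hypothetical frontend parameters for one FEATHER variant (not Allo syntax)."""
    pe_rows: int          # height of the reconfigurable PE array
    pe_cols: int          # width of the reconfigurable PE array
    buffer_kib: int       # on-chip buffer capacity in KiB
    data_width: int = 8   # operand precision in bits

    def peak_macs_per_cycle(self) -> int:
        return self.pe_rows * self.pe_cols

def generate_variant(spec: FeatherVariantSpec) -> dict:
    """Stand-in for the generator: returns the parameter set that a real ADL
    flow would lower into RTL.  Only the spec changes between variants."""
    return {
        "PE_ROWS": spec.pe_rows,
        "PE_COLS": spec.pe_cols,
        "BUFFER_KIB": spec.buffer_kib,
        "DATA_WIDTH": spec.data_width,
    }

# The same frontend program, scaled for two device classes:
edge   = generate_variant(FeatherVariantSpec(pe_rows=8,  pe_cols=8,  buffer_kib=64))
server = generate_variant(FeatherVariantSpec(pe_rows=32, pe_cols=32, buffer_kib=1024))
print(edge)
print(server)
```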
ACT Ecosystem – Automatically generating software support for accelerators
- Talk: Introduction to ACT Ecosystem (20 min)
- Hands-on: Quick tutorial on TAIDL (ACT’s ISA specification language) (10 min)
- Hands-on: Writing FEATHER ISA in TAIDL and generating its compiler (10 min)
- Hands-on: Compilation using the generated FEATHER compiler (10 min)
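The sketch below illustrates, in plain Python, the workflow these hands-ons walk through: describe instructions as data, then derive a small compiler that lowers a tiled workload into an instruction stream. The opcode names, fields, and schedule are invented for illustration; they are neither TAIDL syntax nor FEATHER's real ISA, both of which are covered in the session.

```python
from dataclasses import dataclass
from math import ceil

@dataclass(frozen=True)
class Instr:
    """One instruction in a made-up accelerator ISA (not FEATHER's real ISA)."""
    opcode: str
    operands: tuple

def compile_gemm(M: int, N: int, K: int, tile: int = 16) -> list:
    """Tiny stand-in for a generated compiler: tile a GEMM and emit a
    load/compute/store stream, one output tile at a time."""
    program = []
    for mt in range(ceil(M / tile)):
        for nt in range(ceil(N / tile)):
            for kt in range(ceil(K / tile)):
                program.append(Instr("LOAD_A", (mt, kt)))
                program.append(Instr("LOAD_B", (kt, nt)))
                program.append(Instr("TILE_MATMUL", (mt, nt, kt)))
            program.append(Instr("STORE_C", (mt, nt)))
    return program

stream = compile_gemm(32, 32, 32)
print(len(stream), "instructions")   # 2 * 2 * (3 * 2 + 1) = 28
print(stream[:3])
```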
Conclusion and Ongoing Development
- Summary of key takeaways (1 min)
- FEATHER – ongoing projects and call for contributions
- Allo – ongoing projects (Niansong Zhang or Hongzheng Chen)
- ACT – ongoing projects (Devansh Jain, 3~5 min)
Organizers

Jianming Tong
Georgia Institute of Technology
5th-year Ph.D. candidate focusing on full-stack optimizations for the efficiency and privacy of AI systems. Designer of FEATHER.

Niansong Zhang
Cornell University
5th-year Ph.D. student. Research explores accelerator programming models, compute-in-SRAM techniques, and design automation.

Devansh Jain
UIUC
Ph.D. student. Primary research objective is to develop a unified compiler infrastructure for tensor architectures.

Tushar Krishna
Georgia Institute of Technology
Associate Professor in ECE. Research spans computer architecture, interconnection networks, and AI/ML accelerator systems.

Zhiru Zhang
Cornell University
Professor in ECE. IEEE Fellow. Research investigates new algorithms, design methodologies, and automation tools for heterogeneous computing.

Charith Mendis
UIUC
Assistant Professor. Research interests are at the intersection of compilers, program optimization, and machine learning.

Hongzheng Chen
Cornell University
5th-year Ph.D. candidate. Research interests lie in compilers, programming systems, and accelerator architecture.

Akash Pardeshi
UIUC
M.S. student. Research focuses on techniques such as equality saturation and e-graph applications to ML compilers.

Yujie Li
Georgia Institute of Technology
First-year MS student working on microarchitecture design and RTL simulation for FEATHER.