ADELT: Transpilation between Deep Learning Frameworks

Linyuan Gong, Jiayi Wang, Alvin Cheung

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6279-6287. https://doi.org/10.24963/ijcai.2024/694

We propose the Adversarial DEep Learning Transpiler (ADELT), a novel approach to source-to-source transpilation between deep learning frameworks. ADELT uniquely decouples code skeleton transpilation from API keyword mapping. For code skeleton transpilation, it uses few-shot prompting on large language models (LLMs); for API keyword mapping, it uses contextual embeddings from a code-specific BERT. These embeddings are trained in a domain-adversarial setup to generate a keyword translation dictionary. ADELT is trained on an unlabeled web-crawled deep learning corpus, without relying on any hand-crafted rules or parallel data. It outperforms state-of-the-art transpilers, improving the pass@1 rate by 16.2 points and 15.0 points on the PyTorch-Keras and PyTorch-MXNet transpilation pairs, respectively. Our code is openly available at https://github.com/gonglinyuan/adelt.
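To illustrate the keyword-dictionary idea described in the abstract, the toy sketch below maps each source-framework API keyword to its nearest target-framework keyword by cosine similarity between embedding vectors. The hand-made 2-D vectors and the helper names (`build_keyword_dictionary`, `cosine`) are hypothetical illustrations; ADELT itself derives the embeddings from a code-specific BERT trained in a domain-adversarial setup.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def build_keyword_dictionary(src_emb, tgt_emb):
    """Map each source keyword to the target keyword whose embedding
    is most similar (greedy nearest-neighbor lookup)."""
    return {
        s: max(tgt_emb, key=lambda t: cosine(vs, tgt_emb[t]))
        for s, vs in src_emb.items()
    }


# Toy, hand-crafted embeddings (hypothetical; the real system learns
# contextual embeddings from an unlabeled deep learning corpus).
pytorch_emb = {"nn.Linear": [0.9, 0.1], "nn.ReLU": [0.1, 0.9]}
keras_emb = {"layers.Dense": [0.8, 0.2], "layers.ReLU": [0.2, 0.8]}

print(build_keyword_dictionary(pytorch_emb, keras_emb))
# → {'nn.Linear': 'layers.Dense', 'nn.ReLU': 'layers.ReLU'}
```

In the full method, this lookup would be one component; the code skeleton around the keywords is handled separately by few-shot LLM prompting.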
Keywords:
Natural Language Processing: NLP: Applications
Machine Learning: ML: Adversarial machine learning
Natural Language Processing: NLP: Language models
Natural Language Processing: NLP: Machine translation and multilinguality