
dc.rights.license	CC-BY-NC-ND
dc.contributor.advisor	Wolff, Ivo de
dc.contributor.author	Soest, Lars van
dc.date.accessioned	2023-08-11T00:02:32Z
dc.date.available	2023-08-11T00:02:32Z
dc.date.issued	2023
dc.identifier.uri	https://studenttheses.uu.nl/handle/20.500.12932/44636
dc.description.abstract	Hardware acceleration is the practice of speeding up computations with hardware designed specifically for that type of computation. Accelerate and TensorFlow are libraries that make this accessible to many programmers, but they differ in their level of abstraction and targeted hardware. This thesis investigates the possibility of compiling and executing Accelerate programs in TensorFlow. A compiler is introduced that converts second-order Accelerate programs to first-order TensorFlow graphs, covering 68% of the Accelerate language.
dc.description.sponsorship	Utrecht University
dc.language.iso	EN
dc.subject	Accelerate is a language designed to compile generic parallel array instructions into hardware-specific code. This thesis proposes adding TensorFlow support to Accelerate’s compiler. TensorFlow is a machine learning library by Google. To compile Accelerate programs into TensorFlow, a new implementation is added to Accelerate’s compilation pipeline. The thesis investigates to what extent this backend can cover Accelerate programs and enumerates its limitations.
dc.title	Compiling Second-Order Accelerate Programs to First-Order TensorFlow Graphs
dc.type.content	Master Thesis
dc.rights.accessrights	Open Access
dc.subject.keywords	accelerate;tensorflow;hardware acceleration;compilation
dc.subject.courseuu	Computing Science
dc.thesis.id	21642
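
For readers unfamiliar with the terminology in the abstract above, the following is a minimal sketch (not taken from the thesis) of what a second-order Accelerate program looks like: collective array operations such as zipWith and fold take scalar functions as arguments, and it is this higher-order structure that a compiler must eliminate before it can emit first-order TensorFlow graphs. The sketch uses only the public Data.Array.Accelerate API; the dotp example is the library's standard dot-product illustration.

    import Data.Array.Accelerate as A

    -- Dot product expressed with Accelerate's collective array operations.
    -- Both zipWith and fold receive scalar functions ((*) and (+)) as
    -- arguments, which is what makes this a second-order program.
    dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
    dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

Such a term is normally handed to an Accelerate backend for execution; the compiler described in this thesis instead translates it into a TensorFlow graph within Accelerate's existing compilation pipeline.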

