(FPT Preview) A Scalable FPGA Architecture for Non-linear SVM Training
- 👤 Speaker: Markos Papadonikolakis (Imperial College)
- 📅 Date & Time: Friday 28 November 2008, 11:00 - 12:00
- 📍 Venue: Mahanakorn Laboratory, EEE
Abstract
Support Vector Machines (SVMs) are a popular supervised learning method, providing state-of-the-art accuracy in various classification tasks. However, SVM training is time-consuming for large-scale problems. This work proposes a scalable FPGA architecture that targets a geometric approach to SVM training based on Gilbert's algorithm using kernel functions. The architecture is partitioned into floating-point and fixed-point domains in order to exploit the FPGA's available resources efficiently for the acceleration of non-linear SVM training. Implementation results show a speed-up of up to three orders of magnitude for the most computationally expensive part of the algorithm, compared to a software implementation.
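For readers unfamiliar with the geometric view of SVM training, a minimal sketch of Gilbert's algorithm may help: it iteratively approximates the minimum-norm point in a convex hull, which for SVMs corresponds to finding the maximum-margin separator. The sketch below shows the linear case for clarity; the talk's architecture evaluates the inner products through kernel functions instead, and the function name and structure here are illustrative, not taken from the work itself.

```python
import numpy as np

def gilbert_min_norm(points, iters=200):
    """Approximate the minimum-norm point in the convex hull of
    `points` (one point per row) using Gilbert's algorithm.
    Illustrative linear-kernel sketch only."""
    w = points[0].copy()                          # start at any hull vertex
    for _ in range(iters):
        # Support step: pick the hull vertex minimizing <w, p>.
        p = points[np.argmin(points @ w)]
        d = w - p
        denom = d @ d
        if denom == 0:                            # w is already that vertex
            break
        # Exact line search from w toward p, clipped to the segment.
        lam = np.clip((w @ d) / denom, 0.0, 1.0)
        w = w - lam * d                           # i.e. w + lam * (p - w)
    return w
```

The support step (a minimization over all training points) dominates the cost, which is the part the talk reports accelerating by up to three orders of magnitude.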
Series: This talk is part of the CAS FPGA Talks series.