FPGA Hardware Acceleration for Ultra-Low-Latency Machine Learning: Architecture and System Level Implementation for High-Frequency Trading
Download: Alperen_Thesis_FBE.pdf; ALPEREN KOYUN.pdf
Date: 2026-1
Author: Koyun, Alperen
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 318 views, 0 downloads
Abstract
Mid-price prediction with ultra-low latency is a fundamental component in High-Frequency Trading (HFT). This thesis explores the realization of XGBoost machine learning models on FPGA to achieve accurate predictions within sub-microsecond latency for HFT applications. To this end, model features are generated with integer semantics by construction, avoiding floating-point definitions, runtime normalization, and division operations. The XGBoost model is realized using a fully unrolled, LUT-based RTL implementation. Unlike existing LUT-based decision tree architectures, the proposed approach supports signed features and regression XGBoost model conversions, extending LUT-based inference beyond classification-only and unsigned-feature assumptions. The outcome of the thesis is a hardware-aware framework for low-latency, end-to-end deployment of XGBoost models on FPGA platforms. Unlike prior work limited to training or isolated inference, this thesis realizes a complete end-to-end system with hardware-based feature generation, for which no directly comparable implementations exist. The framework operates directly on reconstructed Limit Order Book (LOB) streams and incorporates a hardware-oriented feature generation methodology, a structured feature representation, and an automated flow for generating synthesizable hardware descriptions from LOB data. The framework is evaluated in terms of prediction accuracy, hardware resource consumption, and inference latency using an end-to-end FPGA realization. The results demonstrate that the proposed framework enables deterministic, sub-microsecond end-to-end inference on FPGA while preserving the behavior of the corresponding software model, enabling reliable deployment of machine-learning-based mid-price prediction in latency-critical HFT systems.
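To illustrate the general technique the abstract describes, the sketch below shows (in software) what integer-only LOB feature generation and a fully unrolled boosted-tree ensemble look like. All feature names, thresholds, and leaf values here are invented for illustration and are not taken from the thesis; the point is only the structure: signed integer features built without division or normalization, and each regression tree flattened into pure comparisons that would map to combinational logic and LUTs in RTL.

```python
def features_from_lob(best_bid: int, best_ask: int,
                      bid_qty: int, ask_qty: int):
    """Integer-only feature generation: no floats, no runtime division.
    Prices are assumed pre-scaled to integer ticks (hypothetical example)."""
    spread = best_ask - best_bid      # signed integer feature
    mid_x2 = best_ask + best_bid      # 2 * mid-price, avoiding a divide by 2
    imbalance = bid_qty - ask_qty     # signed proxy, avoids (b-a)/(b+a)
    return spread, mid_x2, imbalance

def tree0(spread: int, imbalance: int) -> int:
    # One boosted regression tree, fully unrolled into comparisons;
    # in an RTL realization each comparison is combinational logic
    # feeding a leaf-select LUT. Thresholds/leaves are made up.
    if imbalance <= -5:
        return -2 if spread >= 3 else -1
    return 2 if imbalance >= 5 else 0

def tree1(spread: int, imbalance: int) -> int:
    if spread >= 4:
        return -1
    return 1 if imbalance >= 0 else 0

def predict_mid_move(best_bid: int, best_ask: int,
                     bid_qty: int, ask_qty: int) -> int:
    spread, _mid_x2, imbalance = features_from_lob(
        best_bid, best_ask, bid_qty, ask_qty)
    # Regression ensemble output = sum of integer-scaled leaf values.
    return tree0(spread, imbalance) + tree1(spread, imbalance)
```

Note how signed intermediate values (negative `imbalance`, negative leaves) flow through unchanged, which is the capability the thesis highlights over classification-only, unsigned-feature LUT architectures.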
Subject Keywords: FPGA, Low-latency Design, XGBoost, HFT
URI: https://hdl.handle.net/11511/118428
Collections: Graduate School of Natural and Applied Sciences, Thesis
Citation Formats
IEEE
A. Koyun, “FPGA Hardware Acceleration for Ultra-Low-Latency Machine Learning: Architecture and System Level Implementation for High-Frequency Trading,” M.S. - Master of Science, Middle East Technical University, 2026.