Quant Tech Workflow
Last updated
We deploy a range of strategies to keep our models current with the latest data, so that every trade is made with the best information available.
We acquire tick-level trading data from multiple exchanges. This is the most granular data exchanges offer, giving us fine-grained insight into market movements. The data is then processed and cleaned, with cross-referencing across sources to verify accuracy, before it enters our quant tech modeling.
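One way to sketch the cross-referencing step: compare each exchange's quote for the same instrument and timestamp against the cross-exchange median, and flag outliers. This is a minimal illustration with a hypothetical `tolerance` parameter, not the actual pipeline.

```python
from statistics import median

def cross_reference_ticks(feeds, tolerance=0.001):
    """Flag whether each feed agrees with the cross-exchange median.

    feeds: dict mapping exchange name -> price for the same instrument
    and timestamp. Returns exchange -> True if within `tolerance`
    (fractional deviation from the median), False otherwise.
    """
    mid = median(feeds.values())
    return {ex: abs(p - mid) / mid <= tolerance for ex, p in feeds.items()}

# Example: exchange "C" reports a clearly off-market price.
flags = cross_reference_ticks({"A": 100.00, "B": 100.02, "C": 97.50})
```

Ticks that fail the check would be quarantined for review rather than fed into the models.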
Once the data is clean and ready, we use it to develop our own unique trading features. These may include moving averages, cumulative volume deltas (CVDs), volatility indices, order book dynamics, and more. Building multiple features gives us a holistic model of the market, rather than relying on narrow data sets to make decisions.
The next step is to create the right learning environments for our quant tech. We adopt ensemble learning techniques built on supervised learning algorithms and train them on chronologically separated historical data: training, validation, and test sets.
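For market data, the split into training, validation, and test sets must preserve time order so the model never trains on information from the future. A minimal sketch, with illustrative default fractions:

```python
def chronological_split(samples, train_frac=0.7, val_frac=0.15):
    """Split time-ordered samples into train/validation/test sets
    without shuffling, preserving chronological order."""
    n = len(samples)
    i = int(n * train_frac)          # end of training segment
    j = i + int(n * val_frac)        # end of validation segment
    return samples[:i], samples[i:j], samples[j:]
```

The validation set tunes hyperparameters; the held-out test set is touched only once, for the final evaluation.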
Each strategy undergoes rigorous backtesting, simulated against past market conditions. Cross-validation guards against overfitting and confirms the model's robustness.
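A common cross-validation scheme for backtesting is walk-forward validation: train on one window of history, test on the window that follows, then roll both forward. A sketch of the index generator (the window sizes here are placeholders):

```python
def walk_forward_splits(n, train_size, test_size):
    """Yield (train_indices, test_indices) pairs that roll forward in
    time, so every test window lies strictly after its training window."""
    start = 0
    while start + train_size + test_size <= n:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size
```

Aggregating performance across all folds gives a more honest estimate than a single in-sample backtest.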
Our unique system has risk management built in. The strategies we use include setting invalidations (predefined conditions under which we accept that our prior technical analysis was wrong), diversified holdings, and limits on exposure to any single asset.
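The single-asset exposure limit can be expressed as a simple portfolio check. The 10% cap below is an illustrative assumption, not our actual limit:

```python
def exposure_breaches(positions, equity, max_asset_frac=0.10):
    """Flag any single-asset exposure exceeding `max_asset_frac` of equity.

    positions: dict mapping asset -> signed notional value.
    Returns asset -> True if the position breaches the cap.
    """
    return {asset: abs(value) / equity > max_asset_frac
            for asset, value in positions.items()}
```

A breach would trigger a forced reduction of the oversized position before new signals are acted on.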
In addition, we deploy a dynamic position sizing strategy: the size of each position depends on the model's prediction confidence and the current risk profile.
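One way such sizing can work, sketched with hypothetical parameters (`risk_budget`, `min_conf` are assumptions for illustration): scale a base risk budget by how far the model's confidence exceeds a minimum threshold, and take no position below it.

```python
def position_size(equity, confidence, risk_budget=0.02, min_conf=0.55):
    """Size a position by scaling a base risk budget with model confidence.

    Returns 0 below the confidence threshold; scales linearly up to the
    full budget (risk_budget * equity) at confidence 1.0.
    """
    if confidence < min_conf:
        return 0.0
    edge = (confidence - min_conf) / (1.0 - min_conf)  # rescale to [0, 1]
    return equity * risk_budget * edge
```

In practice the risk budget itself would also shrink when portfolio volatility rises, tying sizing to the current risk profile.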
Each model undergoes periodic retraining to ensure it stays relevant to the market's evolving nature.
Though the system is autonomous, human oversight is ever-present. Alerts for anomalies or drastic market changes ensure we can intervene when necessary.
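A minimal sketch of such an anomaly alert: flag any market move whose z-score against recent returns exceeds a threshold (the threshold of 4 here is an illustrative assumption).

```python
def anomaly_alert(recent_returns, latest, z_threshold=4.0):
    """Return True if `latest` is a statistical outlier versus
    `recent_returns`, signalling that a human should review."""
    n = len(recent_returns)
    mean = sum(recent_returns) / n
    std = (sum((r - mean) ** 2 for r in recent_returns) / n) ** 0.5
    return std > 0 and abs(latest - mean) / std > z_threshold
```

When the alert fires, trading on the affected instrument can be paused until an operator signs off.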
Detailed performance reports are generated, emphasizing transparency in every trade, profit, or loss.