Lumnis API Documentation
Contents:
- Introduction
- Getting Started: Obtaining an API Key, Sending your first API call, Using the python SDK, Retrieving live data, Retrieving live data for multiple factors asynchronously, Making a direct http call (Historical Endpoint, Live Endpoint)
- Understanding API responses
- API Reference: microfactor.price, microfactor.vpin, microfactor.order_imbalance, microfactor.kyle_lambda_signed, microfactor.hasbroucks_lambda, microfactor.amihuds_lambda, momentumfactor.macd, momentumfactor.obv, momentumfactor.tsmom, regimefactor.hurst_exponent, regimefactor.anderson_darling, regimefactor.shapiro_wilk, regimefactor.kolmogorov_smirnov, regimefactor.jarque_bera, regimefactor.agostino_k2, regimefactor.fractionally_differentiated_values
- Coming soon (send an email to contact@lumnis.io to join the waitlist): regimefactor.change_point_detection, regimefactor.chu_stinchcombe_white_statistics, regimefactor.chow_type_stat, momentumfactor.market_momentum
- Other Resources
Introduction
The LUMNIS API provides the most comprehensive factor dataset for crypto assets. We provide the following groups of factors:
- Micro-structural Factors - factors derived from order book and trade data. We process terabytes of data to extract the most relevant order book and trade factors, delivered in real time.
- Regime Factors - regime data that enables users to observe how strategies perform under different market conditions. For example, some strategies work better during highly volatile regimes. We use proprietary algorithms to determine regimes effectively.
Getting Started
Obtaining an API Key
You can get an API key by subscribing to any one of the following Lumnis plans via Stripe:
Current subscribers can manage their subscription via the following link: https://billing.stripe.com/p/login/dR602kgch6JGbpm288
Note: You are agreeing to our terms of services and disclaimer if you subscribe to any of our plans.
- Free Plan: Send an email to contact@lumnis.io to receive an API key
- Access to all essential factors (historical data only). 5 API calls/min. 20 API calls/hour. 100 API calls/month. Limited Support. An API key will be sent from contact@lumnis.io within the next 24 hours.
- Trial Plan: https://buy.stripe.com/28odSR5PxaSBbvi8wz
- Access to all essential factors (historical data only). 100 API calls/min. 500 API calls/hour. 1k API calls/month. Limited Support. An API key will be sent from contact@lumnis.io within the next 24 hours.
- $30 per month
- Basic Plan: https://buy.stripe.com/dR6cON91JaSBbvi28a
- Access to all essential factors (historical data only). 1k API calls/min. 5k API calls/hour. 10k API calls/month. Limited Support. An API key will be sent from contact@lumnis.io within the next 24 hours.
- $250 per month
- Pro Plan: https://buy.stripe.com/8wM8yxa5NgcVeHudQQ
- Access to all factors (historical data only). 10k API calls/min. 50k API calls/hour. 100k API calls/month. Dedicated Support. An API key will be sent from contact@lumnis.io within the next 24 hours.
- $500 per month
Turnaround for API keys is typically under 24 hours.
Anyone with this API key can use the Lumnis API. If your key is compromised, please email contact@lumnis.io so it can be deactivated or replaced.
We currently provide four years' worth of historical data, starting from 2019-03-30.
Note: Regime, volume and risk factors are coming soon, and more micro-structural factors will be added over time. Feel free to reach out to contact@lumnis.io with any factors you'd like to see added to the API.
At this time the LUMNIS API is in beta mode. This means that the way it works and the data it returns may change at any time. Breaking changes are rare, but do happen. Proper versioning will be introduced in a future release.
This documentation describes all of the available API calls and properties of the returned objects. If you have any questions, please reach out to contact@lumnis.io
Sending your first API call
There are five properties that you must include in every API call to access LUMNIS factors:
factorName
Any one of the items listed below in the API Reference.
exchange
Any of the following exchanges:
- binance
- kraken (coming soon)
- bitmex (coming soon)
- bitfinex (coming soon)
- coinbase (coming soon)
asset
We support the top 15 crypto assets with the largest market cap (if supported by the exchange):
- ["ADAUSD", "BTCUSD", "DASHUSD", "DOGEUSD", "DOTUSD", "ETHUSD", "LTCUSD", "NEOUSD", "XMRUSD", "XRPUSD", "XBTUSD", "SOLUSD", "BNBUSD", "AVAXUSD", "MATICUSD"]
timeFrame
We currently support the following timeframes:
- Digital clock:
- Minute (min)
- Hourly (hour)
- Dollar Clock (coming soon)
date
The date the user is interested in:
- The format should be yyyy-mm-dd
- E.g. 2022-08-23 or 2021-01-01
The API key is passed in the request header.
api_key
A 40-character alphanumeric string that grants you access to the API.
With that in mind, the next step is to send a GET request to api.lumnis.io with the appropriate values set. A good first API call would be to retrieve the vpin factor for ethusdt on the binance exchange. Fill in your API key, then execute the following code.
Using the python SDK
Run the following command in the terminal to install the python sdk
pip install lumnisfactors
Run the following code to retrieve data
import grequests
from lumnisfactors import LumnisFactors

# Add your API KEY
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

# Retrieve a single day's worth of data
df_single_date = lumnis.get_single_date_data("vpin", "binance", "ethusdt", "hour", "2022-01-01")

# Retrieve multiple days' worth of data
# Limit requests to 100 days to avoid exceeding API throttling limits, unless you have a Pro plan
historical_data = lumnis.get_historical_data("vpin", "binance", "ethusdt", "hour", "2022-01-01", "2022-02-01")
Note: Make sure you add import grequests to the top of the python file
Retrieving live data
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return
# The maximum number of returned values is 1000 for the minute timeframe and 24 for the hour timeframe
Retrieving live data for multiple factors asynchronously
factors = ['rsi', 'vpin', 'order_imbalance', 'kyle_lambda_signed', 'amihuds_lambda', 'hasbroucks_lambda',
           'ffd', 'macd', 'obv', 'donchian', 'accumulation_distribution', 'tsmom', 'bvc', 'hurst_exponent',
           'anderson_darling_norm', 'anderson_darling_expon', 'shapiro_wilk', 'kolmogorov_smirnov',
           'jarque_bera', 'agostino_k2']
live_data = lumnis.get_multifactor_live_data(factors, "binance", "ethusdt", "min", 100)
Making a direct http call
Historical Endpoint
API_BASE = "https://api.lumnis.io/v1"
PARAMS = "/historical?factorName=%s&exchange=%s&asset=%s&timeFrame=%s&date=%s" % (factorName, exchange, asset, timeFrame, date)
Live Endpoint
API_BASE = "https://api.lumnis.io/v1"
PARAMS = "/live?factorName=%s&exchange=%s&asset=%s&timeFrame=%s&offset=%s" % (factorName, exchange, asset, timeFrame, offset)
import json
import pandas as pd
import requests

### Define parameters
API_KEY = "YOUR_API_KEY_HERE"
factorName = "vpin"
exchange = "binance"
asset = "btcusdt"
timeFrame = "min"  # or "hour"
date = "2022-08-23"

### Make call to API
API_BASE = "https://api.lumnis.io/v1"
PARAMS = "/historical?factorName=%s&exchange=%s&asset=%s&timeFrame=%s&date=%s" % (factorName, exchange, asset, timeFrame, date)
url = API_BASE + PARAMS
res = requests.get(url, headers={"x-api-key": API_KEY})

### Process data from API
data_api = pd.DataFrame(json.loads(res.json()['data']))
data_api.drop_duplicates(inplace=True)
You can send API calls directly in your web browser, using cURL from a command line, or with your programming language of choice.
Understanding API responses
All API calls return JSON with both a success and a data property. Exceptions to this will be specified in the documentation. You should always attempt to JSON-decode the response, then use the success property to determine whether the API call succeeded.
{ "success": true, "data": {...} }
The data property of errors will include an error_message and an error_code to help you determine what went wrong. Non-zero error codes should never change, so you can rely on them to make programming flow choices if necessary.
{ "success": false, "data": { "error_message": "Example Error", "error_code": 123 } }
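As a sketch of this decode-then-check pattern (the query parameter values below are placeholders, not a recommended call):

import requests

API_KEY = "YOUR_API_KEY"
url = "https://api.lumnis.io/v1/historical?factorName=vpin&exchange=binance&asset=ethusdt&timeFrame=hour&date=2022-01-01"

res = requests.get(url, headers={"x-api-key": API_KEY})
payload = res.json()  # always JSON-decode first

if payload["success"]:
    data = payload["data"]
else:
    # On failure, data carries error_message and error_code
    err = payload["data"]
    print(err["error_code"], err["error_message"])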
API Reference
microfactor.price
Historical price data
Arguments
factorName = "price"
Example Request
from lumnisfactors import LumnisFactors

factorName = "price"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
microfactor.vpin
VPIN (Volume-Synchronized Probability of Informed Trading) is a measure of order flow toxicity. Order flow toxicity represents the risk of adverse selection.
Method
Volume bars are grouped into buckets of equal traded volume and labelled as buyer or seller initiated. Order imbalance is computed for each bucket and VPIN values obtained.
Usage
If market makers believe toxicity is high, they will liquidate their positions and leave the market. This is one plausible explanation for the 2010 Flash Crash.
Mathematical Formula
$$\mathrm{VPIN} = \frac{\sum_{\tau=1}^{n} \left| V_\tau^B - V_\tau^S \right|}{nV} \approx \frac{\mathbb{E}\left[\left| V_\tau^B - V_\tau^S \right|\right]}{V}$$
Where $\left| V_\tau^B - V_\tau^S \right|$ is the Order Imbalance in bucket $\tau$, $V$ is the Volume Bucket Size, $n$ is the # of Volume Buckets, and $\mathbb{E}\left[\left| V_\tau^B - V_\tau^S \right|\right]$ is the Expected Order Imbalance.
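A minimal sketch of the bucket arithmetic, assuming buyer/seller volumes per equal-volume bucket have already been classified (e.g. with BVC); the function name and inputs are illustrative, not part of the API:

import numpy as np

def vpin(buy_vol, sell_vol, n):
    """Rolling VPIN over the last n equal-volume buckets (sketch)."""
    buy_vol, sell_vol = np.asarray(buy_vol), np.asarray(sell_vol)
    imbalance = np.abs(buy_vol - sell_vol)            # |V_B - V_S| per bucket
    bucket_size = (buy_vol + sell_vol).mean()         # V: volume per bucket (constant by construction)
    kernel = np.ones(n) / n                           # rolling mean over n buckets
    return np.convolve(imbalance, kernel, mode="valid") / bucket_size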
Arguments
factorName = "vpin"
Example Request
from lumnisfactors import LumnisFactors

factorName = "vpin"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view vpin.json response
The columns returned by the json are:
- vpin_5
- vpin_25
- vpin_50
- vpin_250
- vpin_500
- The number appended at the end of vpin determines the lookback window used to compute the buy volume with the BVC algorithm.
- For example, vpin_5 uses a lookback window of 5 in the BVC algorithm.
microfactor.order_imbalance
Order Imbalance is a measure of the balance between buyer- and seller-initiated trades within a volume bar. Large imbalances could be random or induced by public or private information.
Method
The Bulk Volume Classification method is used to classify trades as either buyer- or seller-initiated. For each volume bar or bucket of volume bars, an order imbalance metric is defined as the difference between buyer- and seller-initiated trades.
Usage
An imbalance between buyer- and seller-initiated trades can lead to inventory issues that can pressure liquidity. Order imbalances sometimes signal private information and excess buying or selling is a determinant of market price movements.
Mathematical Formula
$$\mathrm{OI}_\tau = V_\tau^B - V_\tau^S$$
Where $\mathrm{OI}_\tau$ refers to the order imbalance in volume bucket $\tau$, with $V_\tau^B$ and $V_\tau^S$ the buyer- and seller-initiated volume in that bucket.
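A minimal sketch of the Bulk Volume Classification step using the standard-normal CDF variant (the API's exact distribution choice is not documented here, so treat this as illustrative; the function name and inputs are hypothetical):

import numpy as np
from scipy.stats import norm

def bvc_order_imbalance(close, volume):
    """Split bar volume into buyer/seller-initiated via BVC, then take the difference."""
    close, volume = np.asarray(close), np.asarray(volume)
    dp = np.diff(close)
    sigma = dp.std()
    buy_frac = norm.cdf(dp / sigma)          # fraction of each bar's volume classified as buys
    buy_vol = volume[1:] * buy_frac
    sell_vol = volume[1:] * (1.0 - buy_frac)
    return buy_vol - sell_vol                # order imbalance per bar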
Arguments
factorName = "order_imbalance"
Example Request
from lumnisfactors import LumnisFactors

factorName = "order_imbalance"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view order_imbalance.json response
The columns returned by the json are:
- order_imbalance_(lookbackWindow)
- signed_order_imbalance_(lookbackWindow)
- Where lookbackWindow can be any of the following: [5, 25, 50, 250, 500]
- The number appended at the end of the order imbalance factor determines the lookback window used to compute the buy volume with the BVC algorithm.
- For example, order_imbalance_5 uses a lookback window of 5 in the BVC algorithm.
microfactor.kyle_lambda_signed
Kyle's Lambda measures the relationship between price change and order flow imbalance, known as market impact. It can be interpreted as the cost of a certain amount of liquidity over a given time period.
Method
Kyle's Lambda is computed by regressing a time series of prices against a time series of signed volume, or net order flow.
Usage
It can be used as a measure of market liquidity as it is an inverse proxy of liquidity. Higher values imply lower liquidity and market depth.
Mathematical Formula (Regression)
$$\Delta p_t = \lambda \, (b_t V_t) + \varepsilon_t$$
Where $p_t$ is the time series of prices, $b_t$ is the time series of aggressor flags (buyer-initiated or seller-initiated), and $b_t V_t$ is therefore the net order flow.
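A minimal sketch of the regression, assuming per-bar prices, aggressor flags, and volumes are available as arrays (the function name and inputs are hypothetical):

import numpy as np

def kyle_lambda(price, sign, volume):
    """Regress price changes on signed volume; the slope is Kyle's lambda (sketch)."""
    price, sign, volume = map(np.asarray, (price, sign, volume))
    dp = np.diff(price)
    net_flow = (sign * volume)[1:]                       # b_t * V_t, aligned with dp
    X = np.column_stack([np.ones_like(net_flow), net_flow])
    beta, *_ = np.linalg.lstsq(X, dp, rcond=None)        # OLS fit
    return beta[1]                                       # lambda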
Arguments
factorName = "kyle_lambda_signed"
Example Request
from lumnisfactors import LumnisFactors

factorName = "kyle_lambda_signed"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view kyle_lambda_signed.json response
The columns returned by the json are:
- kyle_lambda_(lookbackWindow)
- kyle_lambda_t_value_(lookbackWindow)
- Where lookbackWindow can be any of the following: [5, 25, 50, 250, 500]
- The number appended at the end of the Kyle lambda factor determines the window used to compute the Kyle lambda statistics.
microfactor.hasbroucks_lambda
Hasbrouck's Lambda is a measure of price impact based on trade-and-quote data. It follows up on Kyle's Lambda and can be interpreted as an approximation of the effective cost of trading, referred to as market impact.
Method
A regression similar to that used for Kyle's Lambda is computed using trade-and-quote data and Gibbs estimates based on daily closing prices.
Usage
Allows the relationship between trading costs and returns to be explored. Useful for assessing the effects of trading costs in situations where high frequency data is not available.
Mathematical Formula (Regression)
$$\Delta \log p_t = \lambda \sum_{i} b_{i,t} \sqrt{d_{i,t}} + \varepsilon_t$$
Where $\Delta \log p_t$ represents the price change, $b_{i,t}$ is the aggressor flag (buyer-initiated or seller-initiated), and $d_{i,t}$ is the dollar volume involved.
Arguments
factorName = "hasbroucks_lambda"
Example Request
from lumnisfactors import LumnisFactors

factorName = "hasbroucks_lambda"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view hasbroucks_lambda.json response
The columns returned by the json are:
- hasbrouck_lambda_(lookbackWindow)
- hasbrouck_lambda_t_value_(lookbackWindow)
- Where lookbackWindow can be any of the following: [5, 25, 50, 250, 500]
- The number appended at the end of the Hasbrouck lambda factor determines the window used to compute the Hasbrouck lambda statistics.
microfactor.amihuds_lambda
Amihud's Lambda explores the relationship between absolute returns and illiquidity. It measures the daily price response associated with one dollar of trading volume and is a proxy for price impact.
Method
Daily close-to-close returns are regressed against daily dollar volume, in a similar fashion to Kyle's and Hasbrouck's lambdas.
Usage
Amihud argues that liquidity reflects the impact of order flow on price. Usage is similar to Kyle's and Hasbrouck's lambdas in that higher values imply lower liquidity.
Mathematical Formula (Regression)
$$\left| \Delta \log \bar{p}_t \right| = \lambda \sum_{i \in B_t} d_i + \varepsilon_t$$
Where $B_t$ is the set of trades in bar $t$, $\bar{p}_t$ is the closing price of bar $t$ and $d_i$ is the dollar volume involved in trade $i$.
Arguments
factorName = "amihuds_lambda"
Example Request
from lumnisfactors import LumnisFactors

factorName = "amihuds_lambda"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view amihuds_lambda.json response
The columns returned by the json are:
- amihud_lambda_(lookbackWindow)
- amihud_lambda_t_value_(lookbackWindow)
- Where lookbackWindow can be any of the following: [5, 25, 50, 250, 500]
- The number appended at the end of the Amihud lambda factor determines the window used to compute the Amihud lambda statistics.
momentumfactor.macd
Moving Average Convergence Divergence (MACD) is a trend-following momentum indicator that shows the relationship between two moving averages of a security's price.
Method
Compute two exponentially weighted moving averages of price, one with a short half-life and one with a long half-life, and take their difference. This difference is normalized by a moving standard deviation measuring realized volatility over the past 63 hours, and the resulting series is normalized again by its own realized standard deviation over the short window.
Usage
Can be used to determine trends in the market and to gauge the strength of trends in the market
Mathematical Formula
$$\mathrm{MACD}_t = \frac{\mathrm{EWMA}(p_t, S) - \mathrm{EWMA}(p_t, L)}{\sigma_{63}(p_t)}$$
Where $\mathrm{EWMA}(p_t, S)$ and $\mathrm{EWMA}(p_t, L)$ are exponentially weighted moving averages of price, $\sigma_{63}(p_t)$ is the moving standard deviation over the past 63 bars, $S$ is the Short Lookback Window and $L$ is the Long Lookback Window.
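A minimal sketch of this normalization pipeline in pandas (the half-life and window choices here are illustrative assumptions, not the API's exact parameters):

import pandas as pd

def macd_signal(price: pd.Series, short: int = 8, long: int = 24) -> pd.Series:
    """Normalized MACD: EWMA difference scaled by rolling volatility (sketch)."""
    m = price.ewm(halflife=short).mean() - price.ewm(halflife=long).mean()
    q = m / price.rolling(63).std()      # normalize by 63-bar realized volatility
    return q / q.rolling(short).std()    # normalize again over the short window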
Arguments
factorName = "macd"
Example Request
from lumnisfactors import LumnisFactors

factorName = "macd"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
momentumfactor.obv
OBV (On balance volume) is a momentum indicator that uses volume information to measure buying and selling pressure. It's based on the idea that changes in volume can precede changes in stock price. The OBV is calculated by cumulatively adding or subtracting the volume traded on a given day, depending on whether the stock price closes higher or lower than the previous day's close.
Method: If the current day's close price is higher than the previous day's close price, the volume traded on that day is added to the OBV value of the previous day. If the current day's close price is equal to the previous day's close price, the OBV value remains unchanged. If the current day's close price is lower than the previous day's close price, the volume traded on that day is subtracted from the OBV value of the previous day.
Usage: Can be used to determine trends in the market and to gauge the strength of trends in the market.
Formula:
$$\mathrm{OBV}_t = \mathrm{OBV}_{t-1} + \begin{cases} V_t & \text{if } C_t > C_{t-1} \\ 0 & \text{if } C_t = C_{t-1} \\ -V_t & \text{if } C_t < C_{t-1} \end{cases}$$
Where $C_t$ is the close price and $V_t$ is the volume at time $t$.
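A minimal sketch of this cumulative rule (the function name and inputs are hypothetical):

import numpy as np

def obv(close, volume):
    """Cumulative volume signed by the direction of the close-to-close move."""
    close, volume = np.asarray(close), np.asarray(volume)
    direction = np.sign(np.diff(close))   # +1 up, -1 down, 0 unchanged
    return np.concatenate([[0.0], np.cumsum(direction * volume[1:])])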
Arguments
factorName = "obv"
Example Request
from lumnisfactors import LumnisFactors

factorName = "obv"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
momentumfactor.tsmom
TSMOM (Time Series Momentum) is a momentum trading strategy that buys (sells) an asset if its returns have been positive (negative) over a specific time period.
Method: Calculate the returns of the asset over the specific time period. If the returns over the specified time period are positive, the strategy buys the asset. If the returns are negative, the strategy sells the asset.
Usage: Time series momentum can be used as a standalone trading strategy or as part of a multi-factor investment approach. It is often used by traders and investors to capture short-term momentum in the market and generate alpha.
Formula:
$$r_t = \frac{P_t}{P_{t-k}} - 1$$
Where:
$r_t$ = the return of the asset at time t
$P_t$ = the price of the asset at time t
$P_{t-k}$ = the price of the asset k days ago
k = the lookback window
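A minimal sketch of the resulting signal (the function name is hypothetical):

import numpy as np
import pandas as pd

def tsmom_signal(price: pd.Series, k: int) -> pd.Series:
    """Sign of the k-bar return: +1 long, -1 short, 0 flat (sketch)."""
    r = price / price.shift(k) - 1.0
    return np.sign(r)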
Arguments
factorName = "tsmom"
Example Request
from lumnisfactors import LumnisFactors

factorName = "tsmom"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
regimefactor.hurst_exponent
The Hurst exponent is a statistical measure used to quantify the degree of persistence or anti-persistence in a time series. It is used to analyze the long-term memory of a time series and is commonly used in financial analysis to measure the degree of trendiness or mean reversion in asset prices.
Method: The Hurst exponent is calculated using a rescaled range analysis of a time series. This involves dividing the time series into smaller windows of equal length, calculating the mean and standard deviation of each window, and then computing the rescaled range of the time series. The Hurst exponent is then estimated by plotting the log of the rescaled range against the log of the window size and calculating the slope of the line of best fit.
Usage: The Hurst exponent can be used to determine whether a time series is persistent, anti-persistent, or random. A Hurst exponent greater than 0.5 indicates persistence or trendiness, while a Hurst exponent less than 0.5 indicates anti-persistence or mean reversion. A Hurst exponent of 0.5 indicates a random walk.
Formula:
$$\mathbb{E}\left[\frac{R(n)}{S(n)}\right] = C\, n^{H} \quad \text{as } n \to \infty$$
Where $R(n)$ is the range of cumulative deviations from the mean over a window of size $n$, $S(n)$ is the standard deviation over that window, $C$ is a constant, and $H$ is the Hurst exponent.
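A minimal sketch of the rescaled-range estimation described above (window sizes are illustrative; the series must be longer than the largest window):

import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H from the slope of log(R/S) against log(window size)."""
    x = np.asarray(series, dtype=float)
    rs_means = []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from the window mean
            r = dev.max() - dev.min()       # range R(n)
            s = w.std()                     # standard deviation S(n)
            if s > 0:
                rs.append(r / s)
        rs_means.append(np.mean(rs))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope                            # Hurst exponent H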
Arguments
factorName = "hurst_exponent"
regimefactor.anderson_darling
The Anderson-Darling test is a statistical test used to determine whether a given sample of data comes from a specified distribution. The test is used to evaluate whether a sample of data comes from a normal distribution, although it can also be used to test other distributions by specifying the relevant parameters.
Method: The Anderson-Darling test calculates a test statistic based on the squared difference between the empirical cumulative distribution function (CDF) of the sample and the theoretical CDF of the specified distribution. The test statistic is then compared to critical values obtained from tables or calculated using Monte Carlo simulations. The null hypothesis is that the sample comes from the specified distribution, and the alternative hypothesis is that the sample comes from a different distribution.
Usage: The Anderson-Darling test is commonly used to test whether a sample of data comes from a normal distribution. If the test rejects the null hypothesis, it suggests that the sample does not come from a normal distribution, and further investigation may be necessary to determine the underlying distribution. The test is useful for assessing the fit of a distribution to a sample of data, and can be used in applications such as quality control, finance, and engineering.
Arguments
factorName = "anderson_darling_norm" or "anderson_darling_expon"
regimefactor.shapiro_wilk
The Shapiro-Wilk test is a statistical test used to determine whether a given sample of data comes from a normal distribution. It is a widely used test due to its high power and ability to detect deviations from normality in small to moderate-sized samples.
Method: The Shapiro-Wilk test calculates a test statistic based on the deviation of the observed sample data from the expected values under normality. Specifically, it tests the null hypothesis that the sample data comes from a normal distribution by comparing the observed sample data with the expected values under normality using a goodness-of-fit test. The test statistic is then compared to critical values obtained from tables or calculated using Monte Carlo simulations.
Usage: The Shapiro-Wilk test is commonly used to test whether a sample of data comes from a normal distribution. If the test rejects the null hypothesis, it suggests that the sample does not come from a normal distribution, and further investigation may be necessary to determine the underlying distribution. The test is useful for assessing the fit of a distribution to a sample of data, and can be used in applications such as quality control, finance, and engineering.
Arguments
factorName = "shapiro_wilk"
regimefactor.kolmogorov_smirnov
The Kolmogorov-Smirnov (KS) test is a statistical test used to determine whether a given sample of data comes from a specified distribution. The KS test can be used to test any distribution, but is most commonly used to test for normality.
Method: The KS test calculates a test statistic based on the maximum difference between the empirical cumulative distribution function (CDF) of the sample and the theoretical CDF of the specified distribution. The test statistic is then compared to critical values obtained from tables or calculated using Monte Carlo simulations. The null hypothesis is that the sample comes from the specified distribution, and the alternative hypothesis is that the sample comes from a different distribution.
Usage: The KS test is commonly used to test whether a sample of data comes from a specified distribution, such as the normal distribution. If the test rejects the null hypothesis, it suggests that the sample does not come from the specified distribution, and further investigation may be necessary to determine the underlying distribution. The test is useful for assessing the fit of a distribution to a sample of data, and can be used in applications such as quality control, finance, and engineering.
Arguments
factorName = "kolmogorov_smirnov"
regimefactor.jarque_bera
The Jarque-Bera test is a statistical test used to determine whether a given sample of data comes from a normal distribution. The test is based on the skewness and kurtosis of the sample data, and is particularly useful for testing the normality of financial data.
Method: The Jarque-Bera test calculates a test statistic based on the skewness and kurtosis of the sample data. The test statistic is then compared to critical values obtained from tables or calculated using Monte Carlo simulations. The null hypothesis is that the sample comes from a normal distribution, and the alternative hypothesis is that the sample does not come from a normal distribution.
Usage: The Jarque-Bera test is commonly used to test whether a sample of data comes from a normal distribution. If the test rejects the null hypothesis, it suggests that the sample does not come from a normal distribution, and further investigation may be necessary to determine the underlying distribution. The test is useful for assessing the fit of a distribution to a sample of data, and can be used in applications such as finance, economics, and engineering.
Arguments
factorName = "jarque_bera"
regimefactor.agostino_k2
The D'Agostino K-squared test is a statistical test used to determine whether a given sample of data comes from a normal distribution. The test is based on the skewness and kurtosis of the sample data.
Method: The D'Agostino K-squared test calculates a test statistic based on the skewness and kurtosis of the sample data. The test statistic is then compared to critical values obtained from tables or calculated using Monte Carlo simulations. The null hypothesis is that the sample comes from a normal distribution, and the alternative hypothesis is that the sample does not come from a normal distribution.
Usage: The D'Agostino K-squared test is commonly used to test whether a sample of data comes from a normal distribution. If the test rejects the null hypothesis, it suggests that the sample does not come from a normal distribution, and further investigation may be necessary to determine the underlying distribution. The test is useful for assessing the fit of a distribution to a sample of data, and can be used in applications such as finance, economics, and engineering.
Arguments
factorName = "agostino_k2"
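For intuition, all of the regime tests above have standard implementations in scipy.stats; a quick sketch on a synthetic sample (the API's exact rolling-window treatment is not shown):

import numpy as np
from scipy import stats

# Synthetic sample standing in for the returns of a price series
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=1000)

print(stats.shapiro(returns))                                                # Shapiro-Wilk
print(stats.kstest(returns, "norm", args=(returns.mean(), returns.std())))  # Kolmogorov-Smirnov
print(stats.jarque_bera(returns))                                            # Jarque-Bera
print(stats.normaltest(returns))                                             # D'Agostino K-squared
print(stats.anderson(returns, dist="norm"))                                  # Anderson-Darling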
regimefactor.fractionally_differentiated_values
Fractionally Differentiated Values are data that have undergone a stationarity transformation that aims to preserve as much memory as possible. This allows differentiating a time series to stationarity without losing predictive power.
Method
Apply transformations to time series using weights and memory calculations outlined in mathematical formula and research literature.
Usage
Supervised learning algorithms require stationary data. Stationary data transformations usually result in memory loss from the series and a reduction in predictive power. Fractionally differentiated features make data stationary while preserving as much memory as possible.
Mathematical Formula
$$\tilde{X}_t = \sum_{k=0}^{l} \omega_k X_{t-k}, \qquad \omega_0 = 1, \qquad \omega_k = -\omega_{k-1}\,\frac{d - k + 1}{k}$$
Where $X_t$ is the original series, $\tilde{X}_t$ is the fractionally differentiated one, $\omega_k$ are the weights and $d$ is the differencing fraction. $k$ is a positive integer and $l$ is the window length, given a series of $T$ observations.
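A minimal sketch of the fixed-window weighting scheme (illustrative; the API's exact window handling may differ):

import numpy as np

def ffd_weights(d, length):
    """Fractional differencing weights: w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, length):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def ffd(series, d, length):
    """Fractionally differentiated series over a fixed window (sketch)."""
    w = ffd_weights(d, length)[::-1]        # reversed so the dot product aligns with X_{t-k}
    x = np.asarray(series, dtype=float)
    return np.array([w @ x[i - length + 1:i + 1] for i in range(length - 1, len(x))])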
Arguments
factorName = "ffd"
Example Request
from lumnisfactors import LumnisFactors

factorName = "ffd"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view ffd.json response
The columns returned by the json are:
- ffd_(fraction)
- Where fraction can be any of the following: [0.4, 0.6, 0.8]
- The number appended at the end of the ffd factor determines the fraction used to compute the fractionally differentiated value
regimefactor.change_point_detection (Coming Soon - Send email to contact@lumnis.io to join waitlist)
Arguments
factorName = "cpd"
Example Request
from lumnisfactors import LumnisFactors

factorName = "cpd"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
Example Response
Click here to view cpd.json response
The columns returned by the json are:
- cpd_(lookbackWindow)
- Where lookbackWindow can be any of the following: [5, 25, 50, 250, 500]
regimefactor.chu_stinchcombe_white_statistics (Coming Soon - Send email to contact@lumnis.io to join waitlist)
Arguments
factorName = "chu_stinchcombe_white_statistics_one_sided" or "chu_stinchcombe_white_statistics_two_sided"
Example Request
from lumnisfactors import LumnisFactors

factorName = "chu_stinchcombe_white_statistics_one_sided"  # or "chu_stinchcombe_white_statistics_two_sided"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
regimefactor.chow_type_stat (Coming Soon - Send email to contact@lumnis.io join waitlist)
Arguments
factorName = "chow_type_stat"
Example Request
from lumnisfactors import LumnisFactors

factorName = "chow_type_stat"
API_KEY = ""
lumnis = LumnisFactors(API_KEY)

df = lumnis.get_single_date_data(factorName, "binance", "ethusdt", "hour", "2022-01-01")
live_data = lumnis.get_live_data(factor_name=factorName, exchange="binance", asset="ethusdt", time_frame="min", offset=100)
# The offset parameter determines the number of bars to return; the maximum is 1000
momentumfactor.market_momentum (Coming Soon - Send email to contact@lumnis.io join waitlist)
Market Momentum refers to the time series momentum of all assets in a basket, which is calculated by summing up the individual momentum of each asset in the basket. It provides a measure of the overall market trend and helps in determining the direction and strength of the market.
Method: Summation of all TSMOM in a basket
Usage: To determine overall market movement
Formula:
$$\mathrm{MOM}_{\text{market}} = \sum_{i=1}^{N} \mathrm{TSMOM}_i$$
Where:
- $\mathrm{MOM}_{\text{market}}$ is the Whole Market Momentum
- $\mathrm{TSMOM}_i$ is the Time Series Momentum of the i-th asset in the basket
- $N$ is the total number of assets in the basket.
Arguments
factorName = "market_momentum"
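A minimal sketch of the basket summation, assuming a DataFrame with one TSMOM column per asset (a hypothetical input shape, not the API's response format):

import pandas as pd

def market_momentum(tsmom_by_asset: pd.DataFrame) -> pd.Series:
    """Sum the per-asset TSMOM values across the basket at each timestamp."""
    return tsmom_by_asset.sum(axis=1)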