AMMs allow digital assets to be traded automatically and without permission by using liquidity pools instead of buy and sell orders in an order book.
DAMM (dynamic automated market maker) is one of the main components that balance the various liquidity pools in our protocol. While many other popular liquidity protocols use a basic constant-product AMM formula such as:
$$x*y=k$$
Sigmadex has been working on a refined dynamic automated market-making algorithm optimized for reduced impermanent loss, increased liquidity, and overall protocol efficiency.
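For reference, the constant-product baseline prices a swap purely from the pool's reserves. The Python sketch below is a simplified illustration (fees omitted, hypothetical function name) of how the invariant fixes the output amount:

```python
# A minimal sketch of a constant-product swap (fees omitted): the pool's
# reserves must satisfy x * y = k before and after the trade.
def constant_product_swap(x: float, y: float, dx: float) -> float:
    """Return the amount of y received for depositing dx of x."""
    k = x * y              # invariant before the swap
    new_x = x + dx         # pool gains dx of x
    new_y = k / new_x      # pool must keep x * y = k
    return y - new_y       # y paid out to the swapper

# A 1000 x / 10 y pool quotes ~0.00999 y for 1 x.
print(constant_product_swap(1000.0, 10.0, 1.0))
```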
Sigmadex DAMM Goals
- Provide adequate liquidity near the price equilibrium
- Incentivize arbitrage opportunities by adjusting directional tx fees
- Obtain accurate asset pricing equilibrium through an oracle and implied volatility (IV)
- Balance and replenish liquidity pools through penalties

Why Dynamic
The traditional AMMs used in many other projects fall short of dynamically calculating the variables necessary to provide the best possible ecosystem for crypto assets. Imagine wearing the same outfit every day no matter the weather outside - that is how a traditional AMM approaches the market: with a fixed operational objective that is inherently flawed.
DAMMs can provide better output than their traditional AMM counterparts by accessing foundational data points on a continuous basis. Dynamically calculating multiple variables allows for a more robust and capable market maker that can adapt to changing market conditions. Below we theorize the components needed to incorporate implied volatility into an automated market-making architecture.
The Beginning
A constant product market maker algorithm can utilize the implied volatility and oracle price to dynamically concentrate liquidity, increasing capital efficiency during periods of low expected volatility while reducing impermanent loss during periods of high expected volatility.
Since the popularization of Uniswap, the innovation challenges in the AMM space have centered on minimizing impermanent loss for the liquidity provider and maximizing capital efficiency to reduce slippage for the swapper. Attempts at solving these issues have revealed a trade-off similar to an iron triangle: capital efficiency amplifies impermanent loss, while negating impermanent loss exposes other loss vectors such as arbitrage loss, reducing the LP's incentive to provide capital at all. To work around these opposing constraints, a novel approach to liquidity concentration is developed and formed into a market making algorithm.
The solution presented in this post utilizes the implied volatility of an asset - similar to a weather forecast of volatility, derived from the behavior of the asset's overlaid options market - to dynamically adjust the amount of volume around the market price. We propose that the oracle market price and implied volatility are utilized only in the concentration of liquidity; the actual price paid by the user is still forged in the algorithm through arbitrageurs.
In traditional market makers it is up to the users to provide buy and sell orders at certain prices, generating the depth or volume available at a given price to match trades. Theoretically speaking, many buy orders sit at low prices to get deals and many sell orders sit at high prices to earn a premium. Regardless of where they fall on the price curve, these opportunities, or degrees of freedom to swap the assets, are called $liquidity$. By matching these orders, most often at the highest buy order and lowest sell order, the market price is formed.
A New Approach to Liquidity Concentration
The Black-Scholes options pricing model was a revolution in financial mathematics. Not only is it one of the best ways to determine the fair price of an option, but it can also be turned on its head to infer the market's expectations of an asset's future volatility.
We start by inheriting the Chainlink price and implied volatility to construct the probability density function for the asset: this is where we expect prices to fall in the given time frame. In accordance with the financial 6-sigma event tolerance rule, we calculate the minimum and maximum price band for the asset (the tolerance could be lowered with governance voting on a per-asset basis).

$$P(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right)$$
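As a concrete illustration, the following Python sketch uses assumed example oracle readings ($\mu = 100$, $\sigma = 1$; in practice these would come from the Chainlink feeds) to compute the 6-sigma price band and evaluate the density $P(x)$:

```python
import math

# Assumed example oracle readings: in practice mu comes from a Chainlink
# price feed and sigma from an implied-volatility source.
mu, sigma = 100.0, 1.0

# 6-sigma event tolerance: the band within which liquidity is distributed.
p_min, p_max = mu - 6 * sigma, mu + 6 * sigma

def normal_pdf(x: float) -> float:
    """P(x): the probability density of prices under the normal model."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(p_min, p_max, normal_pdf(mu))   # 94.0 106.0 0.3989...
```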
We then find a liquidity distribution function whose curve holds the available liquidity beneath it. We solve for $a$ in the following equation:
$$\frac{a}{\sigma\sqrt{2\pi}}\int_{\mu-6\sigma}^{\mu+6\sigma} e^{\frac{-(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x = \sqrt{q_a q_b}$$
This, rather thankfully, is roughly equal to the liquidity itself:
$$a = \frac{\sqrt{q_a q_b}}{\operatorname{erf}(3\sqrt{2})} \approx \sqrt{q_a q_b}$$
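A quick numerical check in Python, using illustrative reserves of $q_a = 1000$ and $q_b = 10$ (the example carried through the rest of this post), confirms that dividing by $\operatorname{erf}(3\sqrt{2})$ changes the result only negligibly:

```python
import math

q_a, q_b = 1000.0, 10.0            # illustrative total pool reserves
L = math.sqrt(q_a * q_b)           # sqrt(q_a * q_b) = 100

# a * erf(3*sqrt(2)) = sqrt(q_a * q_b), and erf(3*sqrt(2)) is within
# ~2e-9 of 1, so a is the pool liquidity to a very good approximation.
a = L / math.erf(3 * math.sqrt(2))
print(a, L)                        # 100.0000002... 100.0
```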
We then take $a$ and draw a liquidity concentration function reminiscent of Uniswap v3 'tick space', allocating the available liquidity along this curve. Following our example, let us use $q_a = 1000$, $q_b = 10$, $\sigma = 1$ and $\mu = 100$:
$$Depth(p) = \frac{\sqrt{q_a q_b}}{\sigma\sqrt{2\pi}} e^{\frac{-(p-\mu)^2}{2\sigma^2}}$$
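The depth curve can be sampled directly. This minimal Python sketch, using the example parameters above, shows depth peaking at the oracle price and decaying toward the band edges:

```python
import math

q_a, q_b, sigma, mu = 1000.0, 10.0, 1.0, 100.0   # example parameters from the text

def depth(p: float) -> float:
    """Liquidity depth allocated at price p: the normal curve scaled by sqrt(q_a*q_b)."""
    return (math.sqrt(q_a * q_b) / (sigma * math.sqrt(2 * math.pi))) \
        * math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

# Depth peaks at the oracle price and decays toward the 6-sigma band edges.
print(depth(100.0), depth(101.0), depth(106.0))  # 39.894 24.197 ~6e-7
```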
Making a Swap
For example, let us consider a user who wishes to purchase one unit of $q_b$. Our depth graph is then subdivided into $ticks$, in effect buckets of liquidity where $p_2 - p_1$ is 0.01%, or one basis point. Each bucket has a price defined as its starting point along the depth-price curve. To begin our swap we calculate the following:

The amount of liquidity available at the first price, found by taking the one-basis-point price change of the asset and integrating the curve over it:
$$\int_{\mu}^{\mu+bp} Depth(p)\,\mathrm{d}p = \int_{\mu}^{\mu+bp} \frac{\sqrt{q_a q_b}}{\sigma\sqrt{2\pi}} e^{\frac{-(p-\mu)^2}{2\sigma^2}}\,\mathrm{d}p$$
Following our example as we traverse the first tick:
$$\int_{100}^{100.01} Depth(p)\,\mathrm{d}p = \int_{100}^{100.01} \frac{\sqrt{q_a q_b}}{1\cdot\sqrt{2\pi}} e^{\frac{-(p-100)^2}{2\cdot 1^2}}\,\mathrm{d}p = 0.398936$$
For this tick, $\approx 0.4$ liquidity points are exposed. We denote this $L_t$; the quantities of each asset at this tick are given as $q_{at}$ and $q_{bt}$ respectively. Given the following system of equations we calculate the quantities swapped for this $tick$:
$$L_t = \sqrt{q_{at}q_{bt}} = 0.398936$$
$$P_t = \frac{q_{at}}{q_{bt}} = 100$$
Since we are trying to get $q_b$ out, one pays:
$$\sqrt{q_{at}\frac{q_{at}}{P_t}} = \sqrt{q_{at}q_{bt}} = L_t$$
$$\sqrt{q_{at}\frac{q_{at}}{100}} = 0.398936 = L_t$$
$$q_{at} = 3.98936$$
$$q_{bt} = 0.0398936$$
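The first tick's arithmetic can be reproduced numerically. Below is a minimal Python sketch (a simple midpoint-rule integration with hypothetical helper names, not the protocol's implementation) that integrates $Depth(p)$ over the first tick and solves the $L_t$, $P_t$ system for the tick quantities:

```python
import math

q_a, q_b, sigma, mu = 1000.0, 10.0, 1.0, 100.0    # example pool parameters
bp = mu * 0.0001                                   # one basis point of mu = 0.01

def depth(p: float) -> float:
    return (math.sqrt(q_a * q_b) / (sigma * math.sqrt(2 * math.pi))) \
        * math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

def tick_liquidity(p_lo: float, p_hi: float, n: int = 100) -> float:
    """L_t: integrate depth(p) over one tick with a midpoint rule."""
    h = (p_hi - p_lo) / n
    return sum(depth(p_lo + (j + 0.5) * h) for j in range(n)) * h

L_t = tick_liquidity(mu, mu + bp)   # ~0.398936
P_t = mu                            # price at the start of the tick

# Solving L_t = sqrt(q_at * q_bt) and P_t = q_at / q_bt for the tick quantities:
q_at = L_t * math.sqrt(P_t)         # ~3.98936 units of a paid in
q_bt = L_t / math.sqrt(P_t)         # ~0.0398936 units of b paid out
print(L_t, q_at, q_bt)
```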
One pays around 3.98936 units of $a$ to receive around 0.0398936 units of $b$; we then traverse to the next tick:
$$\int_{100.01}^{100.02} Depth(p)\,\mathrm{d}p = \int_{100.01}^{100.02} \frac{\sqrt{q_a q_b}}{1\cdot\sqrt{2\pi}} e^{\frac{-(p-100)^2}{2\cdot 1^2}}\,\mathrm{d}p = 0.398896$$
and then solve the system of equations again to determine the swap:
$$L_t = \sqrt{q_{at}q_{bt}} = 0.398896$$
$$P_t = \frac{q_{at}}{q_{bt}} = 100.01$$
$$\sqrt{q_{at}\frac{q_{at}}{100.01}} = 0.398896 = L_t$$
$$q_{at} = 3.98916$$
$$q_{bt} = 0.0398876$$
We can note that over this tick, the user supplies 3.98916 units of $q_a$ for 0.0398876 units of $q_b$. This process continues until we have allocated 1 full unit of $q_b$ for our swapping mechanism. Generally speaking, the amount of one asset required for a desired amount of the other is given by the set of equations:
$$\sum_{i=0}^{x} \sqrt{L_i^2\,(\mu + i \cdot bp)} = q_{a\text{-}required}$$
$$\sum_{i=0}^{x} \sqrt{\frac{L_i^2}{\mu + i \cdot bp}} = q_{b\text{-}bought}$$
Where:
$$L_i = \int_{\mu + i \cdot bp}^{\mu + (i+1) \cdot bp} \frac{\sqrt{q_a q_b}}{\sigma\sqrt{2\pi}} e^{\frac{-(p-\mu)^2}{2\sigma^2}}\,\mathrm{d}p$$
where $bp$ is one basis point of $\mu$, $q_a$ and $q_b$ are the total pool liquidity, and $q_{ai}$ and $q_{bi}$ are the liquidity of the current tick.
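Putting the pieces together, the following Python sketch walks ticks upward from $\mu$ until a full unit of $q_b$ is filled, applying the summations above. It is an illustrative loop under the example parameters, not the on-chain algorithm; for simplicity it lets the final tick overfill slightly rather than taking a partial tick:

```python
import math

q_a, q_b, sigma, mu = 1000.0, 10.0, 1.0, 100.0    # example pool parameters
bp = mu * 0.0001                                   # one basis point of mu

def depth(p: float) -> float:
    return (math.sqrt(q_a * q_b) / (sigma * math.sqrt(2 * math.pi))) \
        * math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

def tick_liquidity(p_lo: float, p_hi: float, n: int = 20) -> float:
    h = (p_hi - p_lo) / n
    return sum(depth(p_lo + (j + 0.5) * h) for j in range(n)) * h

def swap_b_out(target_b: float):
    """Walk ticks upward from mu, accumulating q_at paid and q_bt received."""
    a_required = b_bought = 0.0
    i = 0
    while b_bought < target_b:
        p_i = mu + i * bp                        # price at the start of tick i
        L_i = tick_liquidity(p_i, p_i + bp)      # liquidity exposed in tick i
        a_required += math.sqrt(L_i ** 2 * p_i)  # q_at = L_i * sqrt(P_i)
        b_bought += math.sqrt(L_i ** 2 / p_i)    # q_bt = L_i / sqrt(P_i)
        i += 1                                   # note: final tick may overfill
    return a_required, b_bought, i

a_in, b_out, ticks = swap_b_out(1.0)
print(a_in, b_out, ticks)   # ~100 units of a over ~26 ticks for ~1 unit of b
```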
Closing Notes
An automated market maker that dynamically concentrates liquidity during periods of low expected volatility to increase capital efficiency, while doing the converse during periods of high expected volatility to reduce impermanent loss, has been derived. It begins by inheriting the oracle price and implied volatility to generate a normal distribution around the average price spanning $-6$ to $+6$ ($k$) standard deviations, with the standard deviation found through the implied volatility as is standard in traditional finance.
The total liquidity of the pool is then distributed along this curve in buckets of 1 basis point called ticks. A person wishing to swap tokens in the pool starts at the point of the curve the pool is currently at (not necessarily the oracle price) and takes from each successive bucket at the price of that bucket until their order is filled. The algorithm routinely updates itself, drawing parameters from the oracle as necessary to reform the curve. Parameters like the 6 $\sigma$ rule could be set by governance on a per-pool basis, or pools could even be divided by overall risk tolerance.
