OceanDAO R2 Proposal: rugpullindex.com

Summary

Project Overview

rugpullindex.com, launched in November 2020 by Tim Daubenschütz, attempts to rank data sets by measuring their markets’ performance. We crawl all of Ocean Protocol’s markets daily and rank them by their liquidity and shareholder equality (Gini coefficient).

Using rugpullindex.com’s factor-investing-based algorithm, investors are now able to:

  • adjust their data set investments based on market data
  • weight their investments by each data set’s market performance and hence lower their exposure risk
  • diversify their data set portfolio to reduce overall risk

It’s widely known that index-based investments tend to yield superior results compared to stock picking.

For data scientists, rugpullindex makes it easier to judge which data sets on the Ocean Marketplace are trustworthy and helpful for solving their problems.

rugpullindex.com is a young project with a long-term goal: creating a decentralized on-chain data set index that end users can invest in (an ERC-20 token, similar to Set Protocol).

Project Deliverables

Where We’re At

In the last three months, we delivered the prototype of rugpullindex.com. It’s a performant website that gets updated every day at midnight and shows our users reliable information about Ocean Protocol’s nascent data market.

We spent time on outreach and growth by writing a launch post and demoing the product to multiple communities (/r/datasets and /r/ethereum). While we’re aware that investors’ trust can only be earned by continuously delivering a reliable service, we think that interest in the product is growing organically. Here’s a graph of the monthly visitors to our website (screenshot from Jan 19, 2021):

Our website analytics are public.

Why We’re Asking For Funding

Our goal is to create an ERC-20 token that allows an investor to gain exposure to the index’s top data sets. To do so, we’d like to follow the path of the DeFi Pulse Index. With funding from the OceanDAO, we plan to deliver the following milestones:

  • Improve the rating algorithm’s resilience against Sybil attacks. We plan to replace the “OCEAN-staked” metric with an “OCEAN-days destroyed” metric (see the sketch after this list). Details here.
  • Increase the frequency of data pool retrievals (price, liquidity, etc.) from daily to, e.g., hourly.
  • Maintain and foster the website such that it stays performant and easy to use.
  • Document the inner workings of the algorithm publicly to improve transparency.
  • Work towards introducing an ERC20 token to allow direct investing in the index.
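
Since “OCEAN-days destroyed” may be an unfamiliar metric: it is analogous to Bitcoin’s “coin days destroyed”. Below is a minimal sketch of the idea in JavaScript; all names and data shapes are illustrative, not our actual implementation.

```javascript
// Sketch of an "OCEAN-days destroyed" metric, analogous to Bitcoin's
// "coin days destroyed". Illustrative only.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// `stakes`: a list of { amount, stakedAt } entries, i.e. `amount` OCEAN
// deposited into a pool at timestamp `stakedAt` (in milliseconds).
function oceanDaysDestroyed(stakes, now = Date.now()) {
  // Tokens accumulate "days" while they sit untouched in a pool;
  // moving them destroys those days. Weighting by amount * holding
  // period favors long-term stakers over freshly moved (and thus
  // cheaply Sybil-able) funds.
  return stakes.reduce(
    (total, { amount, stakedAt }) =>
      total + amount * ((now - stakedAt) / MS_PER_DAY),
    0
  );
}

// Example: 1,000 OCEAN held for 30 days counts ten times as much as
// 1,000 OCEAN held for 3 days.
const daysAgo = (n) => Date.now() - n * MS_PER_DAY;
console.log(oceanDaysDestroyed([{ amount: 1000, stakedAt: daysAgo(30) }])); // 30000
console.log(oceanDaysDestroyed([{ amount: 1000, stakedAt: daysAgo(3) }])); // 3000
```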

Return On Investment

This section’s goal is to show that rugpullindex’s return-on-investment potential exceeds the value of the 15,000 OCEAN we’re requesting.

A startup’s value often gets estimated by comparing it to the market it’s operating in. To assess rugpullindex.com’s value, we’d like to calculate its ROI by comparison.

According to our estimates, S&P 500 ETFs capture roughly 3% ($1T) of the total market cap of the companies the index represents ($31.66T) (see Appendix) [5, 6]. A similar and recently launched project, the DeFi Pulse Index, captures roughly 0.35% ($55M) of the total market cap of the tokens it represents ($15.5B) [3, 4].

According to Data Market Cap, the total market cap of data tokens is $78M [7]. Applying the value capture of the S&P 500 and DPI to the data token market cap, we get the following results:

  • S&P 500, 3%: $78M × 0.03 = $2.34M
  • DPI, 0.35%: $78M × 0.0035 ≈ $273,000

We are aware that the data token market cap may be an unreliable metric, as its markets are rather illiquid at the time of writing. So, for transparency, let’s also model the expected ROI using the total value of OCEAN locked in data pools (~$610k) [1]:

  • S&P 500, 3%: $610k × 0.03 = $18,300
  • DPI, 0.35%: $610k × 0.0035 ≈ $2,135
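
For reproducibility, the four scenarios above boil down to a few multiplications:

```javascript
// The four value-capture scenarios from this section.
const bases = { "data token market cap": 78e6, "OCEAN locked in pools": 610e3 };
const captures = { "S&P 500 (3%)": 0.03, "DPI (0.35%)": 0.0035 };

for (const [baseLabel, base] of Object.entries(bases)) {
  for (const [captureLabel, rate] of Object.entries(captures)) {
    console.log(`${captureLabel} of ${baseLabel}: $${(base * rate).toLocaleString("en-US")}`);
  }
}
// S&P 500 (3%) of data token market cap: $2,340,000
// DPI (0.35%) of data token market cap: $273,000
// S&P 500 (3%) of OCEAN locked in pools: $18,300
// DPI (0.35%) of OCEAN locked in pools: $2,135
```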

Therefore, we conclude that three out of four model approaches yield a positive return for rugpullindex.com. For more details, see the Appendix section of this document.

Team

It’s just me, Tim Daubenschütz, for now.

What qualifies me is that I have five years of professional experience working on blockchain and cryptocurrency projects. I’ve worked:

  • at BigchainDB GmbH as a product manager for BigchainDB, where I co-authored a protocol called COALA IP, a predecessor of the Ocean Protocol;
  • with LeapDAO on L2 scaling and on improving autonomous decision making within a decentralized autonomous organization (DAO); and
  • as part of a hacker collective that builds blockchain games.

You can find more about me on my blog or GitHub.

Conclusion

In this proposal, we presented rugpullindex.com to the OceanDAO. Apart from receiving a grant, rugpullindex.com is interested in gaining more exposure through promotion.

Please vote for rugpullindex.com in round 2 of the OceanDAO Grants Vote on Feb 1, 2021.

Appendix

Case study: Relative Market Value Capture

DeFi Pulse Index

At the time of writing, 1M of the 419M OCEAN in circulation (~$610k, or 0.2%) are staked in data sets [1, 2]. According to Data Market Cap, this results in a total data token market cap of $78M [7]. A similar project, the DeFi Pulse Index, currently tracks an ecosystem of $24.5B (value staked) using a token set called “DPI” [3]. Its market cap is $55M, and it tracks the top ten performing DeFi projects, which have a combined market cap of roughly $15.5B [4].

For DPI, this means it captures roughly 0.35% ($55M / $15.5B) of the DeFi ecosystem. If rugpullindex.com can capture the same share, its market cap would be at $78M × 0.0035 ≈ $273,000.
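
In formula form (MC denoting market cap, RPI a hypothetical rugpullindex index token):

```latex
\mathrm{capture}_{\mathrm{DPI}}
  = \frac{\mathrm{MC}_{\mathrm{DPI}}}{\mathrm{MC}_{\mathrm{DeFi}}}
  = \frac{\$55\,\mathrm{M}}{\$15.5\,\mathrm{B}}
  \approx 0.0035,
\qquad
\mathrm{MC}_{\mathrm{RPI}}
  \approx \mathrm{capture}_{\mathrm{DPI}} \times \$78\,\mathrm{M}
  \approx \$273{,}000
```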

Please note: The above calculations are rough estimates.

S&P 500

At the time of writing, if we sum up the market caps of all companies that the S&P 500 represents, we get a total market cap of $31.66T [5]. Adding up the market caps of all S&P 500 ETFs [6]:

IVV: $240.88B
VOO: $461.44B
SPY: $346.50B
SSO: $4.32B
UPRO: $1.83B
SPXL: $1.55B
SPXS: $481.56M
TOTAL: $1.06T

*RVRS, SPDN, SH, SPUU, PPLC, SDS, SPXU excluded because they are specialized funds (inverse, leveraged, etc.)

For all S&P 500 ETFs, we end up at a combined market cap of $1.06T and hence at a relative value capture of $1.06T / $31.66T ≈ 0.03, or 3%.

Please note: The above calculations are rough estimates.

References


Wonderful work Tim! Great to have you here and I wish the best of luck for this awesome proposal.


You have our vote! Thanks for the great work and collaboration… and thank you for the APIs, we can’t wait to integrate them into alga. :star_struck:


Yes, this is the network effect we want to see. Wonderful! I love seeing projects work together… keep the great news coming. :wink:

Hey Everyone :wave:

Thanks for all the encouraging comments!

Also: Don’t forget to vote :slight_smile:


I really like this idea of dataset quality. Can you share how datasets are ranked? Or how datasets are chosen to be in the index?

Is this selection process related to statistical metrics of the data? Or perhaps metadata / labels following some standard spec? Is reputation important?

All in all very interesting! Good luck!

Hey @seldamat,

thanks for your interest. I’m happy that you like my work.

Now to your questions about how the ranking is created:

My thesis with rugpullindex.com is that data sets on Ocean Protocol can be ranked without actually analyzing the data. Why? Because I think that data set investors have already done a lot of research. Hence, rugpullindex.com simply looks at the performance of a data set’s market.

We check two factors in particular:

  • The amount of liquidity that exists for a data set
  • The share distribution of liquidity providers (Gini coefficient)

With these two metrics, we believe we get a good idea of how risky it is to invest in a data set. That’s what the score represents: a score closer to 100 means less risk; a score closer to 0 means more risk.
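
To make that concrete, here is a minimal sketch of both factors in JavaScript. The exact weighting we use is documented in changelog.txt; the combination below is illustrative only.

```javascript
// Gini coefficient of liquidity-provider shares:
// 0 = perfectly equal distribution, 1 = one provider owns everything.
function gini(shares) {
  const sorted = [...shares].sort((a, b) => a - b);
  const n = sorted.length;
  const total = sorted.reduce((sum, x) => sum + x, 0);
  const weighted = sorted.reduce((sum, x, i) => sum + (i + 1) * x, 0);
  return (2 * weighted) / (n * total) - (n + 1) / n;
}

// Illustrative score: deep liquidity and an equal share distribution
// push the score toward 100; shallow pools and concentrated LPs push
// it toward 0.
function score(liquidityOcean, maxLiquidityOcean, lpShares) {
  const liquidityFactor = liquidityOcean / maxLiquidityOcean;
  const equalityFactor = 1 - gini(lpShares);
  return 100 * liquidityFactor * equalityFactor;
}

// One LP holding 97% of a pool yields a high Gini and thus a low score.
console.log(score(50_000, 100_000, [0.97, 0.01, 0.01, 0.01])); // ~14
```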

https://rugpullindex.com/changelog.txt covers a lot of the technical details of how we rank.
The launch post should give you a good overview of the project.

FYI: I’ve applied for round 3 here

[Deliverable Checklist]

@AlexN, we’ve improved the algorithm’s resilience:

  • [x] Improve the rating algorithm’s resilience against Sybil attacks. We plan to replace the “OCEAN-staked” metric with an “OCEAN-days destroyed” metric. Details here.

For R2’s budget, we consider this task done. We’ll follow up with more resilience fixes if necessary.

@AlexN @idiom-bytes

[ ] Increase the frequency of data pool retrievals (price, liquidity, etc.) from daily to, e.g., hourly.

Update: We’ve been working on this issue for a while now, but we haven’t managed to prioritize it. I’d say that now that we have the Erigon node, and since we’ve separated the crawler into its own NPM package, we’re closer than ever to implementing this change. We can make almost unlimited API calls since we’re hosting our own node, and we’ve prepared the queries and the database so that a surgical change can land in an upcoming proposal. However, the budget in R2 has long been spent, so we’ll have to raise more funding to finish this.
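
To illustrate the planned change, here is a hypothetical sketch of an hourly crawl loop against a self-hosted node; the endpoint and function names are illustrative, not the crawler package’s actual API.

```javascript
// Hypothetical hourly crawl loop against our own Erigon node
// (JSON-RPC on localhost). Illustrative only; not the actual
// crawler's code.
const HOUR_MS = 60 * 60 * 1000;
const NODE_URL = "http://localhost:8545"; // assumed local JSON-RPC endpoint

async function crawlOnce() {
  // With a self-hosted node there are no external rate limits, so each
  // pool's price and liquidity can be fetched hourly instead of once a
  // day at midnight.
  const res = await fetch(NODE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const { result } = await res.json();
  console.log(`crawled at block ${parseInt(result, 16)}`);
  // ...fetch per-pool price and liquidity and write them to the database.
}

// Requires Node.js 18+ for the built-in fetch.
setInterval(crawlOnce, HOUR_MS);
crawlOnce();
```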