New year, new brand, new website: DATALATTE.com
Part 1 - Proposal Submission
Name of Project: DATALATTE
Proposal in one sentence:
We empower internet users to monetize their own data and provide data scientists with affordable access to anonymized, non-identifiable user data through an AI Feature Store.
Description of the project and what problem it is solving:
In data-driven businesses, we see two types of pain points: those that affect data producers and those that affect data consumers.
Every day, internet firms harvest and monetize their users’ data, yet users receive no compensation for this digital labour. Since 2018, the General Data Protection Regulation (GDPR) has given users (data producers) the right to access their personal data on every internet platform they use. Nevertheless, internet users are often unaware of how to exercise this right to access, and monetize, their data.
Meanwhile, to acquire and retain customers and grow their business, companies need to harness insights from their customers’ data. For data scientists (data consumers), finding reliable data is crucial to building robust AI models and capturing actionable insights. However, limited access to high-quality data often prevents models and insights from meeting business needs.
Our DATALATTE DApp aims to relieve both types of pain points by acting as the bridge between data producers and data consumers. On the one hand, we empower internet users (data producers) to reclaim and monetize their personal data anonymously. On the other hand, we provide data scientists (data consumers) with access to previously inaccessible, high-quality data for a small fee. At the heart of our platform are four core pillars: trust, intelligence, community support and usability.
- Grant Deliverable 1: User dashboard frontend
- Grant Deliverable 2: Privacy-by-design pipeline architecture
- Grant Deliverable 3: Blog posts introducing web3 technicals to web2 users, Website SEO optimization
Which category best describes your project?
Which Fundamental Metric best describes your project?
Data Consume Volume
What is the final product?
Figure 1. Illustration of DATALATTE DApp platform.
DATALATTE is a DApp platform with two main stakeholders (Data Producers and Data Consumers), four core DATALATTE functionalities (Audit Store, AI Feature Store, Data Advisor and DATALATTE Catalog) and a Data Marketplace powered by Ocean Protocol.
How does this project drive value to the “fundamental metric” (listed above) and the overall Ocean ecosystem?
Metric: “$ Data Token Consuming Volume”.
Our big vision at DATALATTE is: “Creating an open and fair data economy with unlimited, high-quality data”.
In order to achieve this, DATALATTE wants to enable every internet user (data producer) to monetize all of their online-generated data easily, securely and quickly. Consequently, we will eventually target a total addressable market of 4.66 billion active internet users worldwide (as of January 2021) [ref].
Together with the data consumers, these data producers form a new market that needs to be developed. Since the marketplace behind DATALATTE is a two-sided platform business model, the typical problems arise:
- without data, no consumers
- without consumers, no revenue and therefore no producers and no data. In other words, a classic chicken-and-egg problem!
To address this problem, we have developed an extended growth-wheel and a corresponding strategy.
Figure 2. Extended growth-wheel.
The fundamental assumption: as soon as we bring high-quality, meaningful data to our marketplace, the data consumers will follow - the reverse does not hold!
Therefore, data products and users’ data are the kick-start we need to build our marketplace. But one question remains: how do we get data producers to release their data without immediately benefiting from it in the form of monetary compensation?
We have taken this into account in our extended growth-wheel. We want to use gamification to build a community. Community members bring data into the marketplace by completing data quests, and completing these quests increases their share of the revenue (a piece of the pie). This will increase revenue in the long term, which will then attract more data producers. Everybody can get a piece of the pie by owning one of our data-barista NFTs. Initially, to keep entry barriers low, we will offer the NFTs completely free of charge in exchange for Netflix data. In addition, the growth of the community will increase our overall reach, which in turn will bring more data producers into the DATALATTE marketplace.
To drive data consumption, we will establish a referral and rewards system once the marketplace is up and running. The comparatively large effort on the data-producer side is necessary because, unlike the data-consumer market, this market has not yet established itself. In other words, data consumers already exist, but the fact that private individuals can monetize their data is new to most people!
All these efforts will eventually bring growth to our DATALATTE marketplace. This growth and the possibility to upload more and more different data sources will ensure that the value of the data and therefore the compensation rate will increase!
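The growth-wheel dynamic described above can be sketched as a toy simulation. The feedback loop is: producers contribute data, data attracts consumers, consumer spending funds producer rewards, and rewards attract more producers. All rates and starting values below are hypothetical illustration numbers, not DATALATTE projections:

```python
# Toy model of the two-sided "growth wheel". Every rate here is an
# illustrative assumption chosen only to show the feedback structure.

def simulate_growth_wheel(steps=12, producers=30, consumers=0):
    """Return (producers, consumers, cumulative_revenue) after `steps` rounds."""
    revenue = 0.0
    for _ in range(steps):
        datasets = producers * 1.0              # each producer contributes ~1 dataset
        consumers += int(0.05 * datasets)       # more data -> new consumers join
        round_revenue = consumers * 2.0         # each consumer spends ~$2 per round
        revenue += round_revenue
        producers += int(0.10 * round_revenue)  # rewards attract new producers
    return producers, consumers, revenue

producers, consumers, revenue = simulate_growth_wheel()
print(producers, consumers, round(revenue, 2))
```

The point of the sketch is the cold-start behaviour: with few producers, consumer growth (and therefore revenue) stays near zero for several rounds, which is exactly why the NFT incentives above are needed to seed the producer side first.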
The global data monetization market is expected to grow at a compound annual growth rate of 6.02% and is estimated to reach US$200 billion in 2021 [ref]. This growth results on the one hand from the increasing amount of available data and on the other hand from better use of the data itself. One thing not yet accounted for, however, is that it is currently oligopolies that offer their data or the insights derived from it - if all this data is combined in one marketplace, the result is a significantly higher monetary and informational value!
If we succeed in capturing at least 0.000035% of this fast-growing market, a total transaction volume of $70K is generated (covering the R9, R11, R12 & R13 grant funds), providing an ROI of at least 1.
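The required market share for a $70K transaction volume can be checked directly against the $200B market figure:

```python
# Sanity check: what fraction of a $200B market yields $70K in volume?
market_size_usd = 200e9    # projected 2021 data monetization market [ref above]
target_volume_usd = 70e3   # ~ combined R9-R13 grant funding

share = target_volume_usd / market_size_usd
print(f"required market share: {share:.10f} = {share * 100:.6f}%")
# → required market share: 0.0000003500 = 0.000035%
```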
Funding Requested: (Amount of USD your team is requesting - Round 13 Max @ $20,000)
Proposal Wallet Address:
Have you previously received an OceanDAO Grant (Y/N)? Yes
Team Website (if applicable): DATALATTE.com
Twitter Handle (if applicable): DATALATTE_
Discord Handle (if applicable): DATALATTE
Project lead full name: Hossein Ghafarian Mabhout
Project lead Email: firstname.lastname@example.org
Country of Residence: Portugal
Part 2 - Team
Hossein Ghafarian Mabhout (Amir), PhD, Founder, CEO
- Linkedin: https://www.linkedin.com/in/amirmabhout/
- 10 years circuit & system engineer, IEEE member
- 5 years web3 experience
Kai Schmid, MSc, Co-founder, CMO
- Linkedin: https://www.linkedin.com/in/kai-schmid
- 2 years experience as a startup coach
- Master’s degree in technology and product management
- Experiences in UX-Design and Online Marketing
Extended team and advisors:
Toktam Ghafarian, PhD, Co-founder, Head of AI development
- Linkedin: https://www.linkedin.com/in/toktam-ghafarian-3010a150/
- 8 years Assistant prof. in Computer Engineering and AI Dep. at Khayyam Uni.
- 5 years Head of Computer Engineering and AI Dep.
Amiro0o, MSc, Core Dev
- 5 years experience as a developer
- 3 years experience in image processing
Elaine Egan, Community and Social Media manager
- Linkedin: https://www.linkedin.com/in/elaine-egan-42b89433/
- 5 years web3 experience
- 2 years Ark community manager
- Founder @Carawebs
Mezli Vega Osorno, PhD, Visual advisor
- Linkedin: https://www.linkedin.com/in/mezli-vega-osorno-13008/
- 3 years working at Apple as a creative in digital arts and user-friendly design
- Freelancer in different art projects
Reza, Illustrator artist
- 10 years commercial illustrating
JJ, MSc, Core Dev, back-end
- 5 years full stack
- 3 years pipeline engineer
Amqa, MSc, Core Dev, back-end
- 5 years full stack
- 3 years pipeline engineer
Amirreza, MSc, Core Dev, front-end
- 5 years app and web frontend development
Part 3 - Proposal Details
Project Deliverables - Category
We summarize the details of our deliverables under the Communication and Technical categories.
We have collected 30 NetflixViewingHistory.csv files through our NFT initiative. To fill the remaining 55 NFT slots, we have rebranded our project with our new website: https://DATALATTE.com.
We will conduct keyword research to prepare target-specific content for the website and for social media. We will also develop a content plan for the coming quarter.
We plan to publish blog posts introducing our bigger vision to motivate community growth, and to expand our social media efforts to other channels.
Web: We will design the front-end that gives users access to their data wallet on our platform.
TokenSpice: After running initial TokenSpice simulations, we plan to adapt the model to our needs and run more accurate simulations.
Architecture: We are designing our agent-based architecture using compute-to-data features. With the new architecture, more sensitive data points, for which privacy is at higher risk, can be collected from users, since computation is brought to the data instead of the raw data being exposed.
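A minimal sketch of one privacy-by-design ingestion step, assuming a Netflix viewing-history export with "Title" and "Date" columns. The salted-hash user key, the field names and the helper name are illustrative assumptions, not the final DATALATTE pipeline:

```python
# Sketch: pseudonymize a viewing-history export before it enters the pipeline.
# The direct identifier (e.g. an email address) is replaced with a salted hash,
# and only non-identifying viewing fields are kept.
import csv
import hashlib
import io

def pseudonymize_export(raw_csv: str, user_id: str, salt: str) -> list[dict]:
    """Return viewing rows keyed by a salted hash instead of the raw user id."""
    user_key = hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({
            "user_key": user_key,   # pseudonymous; not reversible without the salt
            "title": row["Title"],
            "date": row["Date"],
        })
    return rows

sample = "Title,Date\nStranger Things: Season 4,01/06/22\nDark: Season 1,15/05/22\n"
records = pseudonymize_export(sample, user_id="alice@example.com",
                              salt="per-deployment-secret")
```

Because the same salt and user id always produce the same key, records from one user can still be joined across uploads for feature building, while the raw identifier never reaches the marketplace.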
An overview of the technology stack
- Cloud infrastructure (AWS)
  - Athena, DynamoDB, Data Lake
  - EC2, ECS, Lambda
  - S3 and HDFS
- Data Science
  - Python, PySpark
  - Scikit-Learn, TensorFlow, Transformers
  - Matplotlib, Dash, Plotly, Keras
  - APIs, Dashboards, Jupyter
- Web Development
Project Deliverables - Roadmap
Round 9-12 deliveries:
- Incorporation as a legal entity in Singapore, DATALATTE PTE LTD
- User interviews with users without web3 background
- Brand and identity design:
- Netflix viewing history data audit module
- DATALATTE marketplace
- MVP architecture redesign
- MVP backend (fetch data API)
- MVP frontend
- IPFS node on AWS
- Four stories in our web3 story series with GPT-3
- Lightpaper DATALATTE
- NFT lightpaper & DATALATTE marketplace
- GPT-3 retraining with the Ocean blog
- NFT landing page
- Rebranding with a new website
- Published our first dataset on Ocean marketplace
- TokenSpice testbench draft
What is the project roadmap?
- Establish partnership with Ocean Protocol and launch DATALATTE marketplace
- Expand social media campaign with AMAs
- Release technical whitepaper
- Brand building
- Release an MVP Web Dashboard for Data scientists to access DATALATTE resources
- Develop a Data Advisor model on AWS
- Develop an AI Feature Store pipeline PoC on AWS
- Design a Data Catalog on AWS
- Establish partnerships and collaboration in data alliances and other projects in the ecosystem
- Design legally compliant APIs to collect users’ accessible but not-yet-exposed data with their permission
- Enable users to select crypto/fiat currency and integrate DEX swap plug-ins so users can manage funds with ease
- Integrate data sources to improve DATALATTE AI pipelines
- Grow community through social media campaigns and ambassador program
- Develop multi-chain wallet-connect
- Develop a cloud-agnostic strategy to switch between cloud providers or to split workload between providers
- Expand data sources to more e-commerce and social media platforms
- Release the DATALATTE mobile application (development version) on the iOS and Android app stores
Do you have feedback or questions?!
Feel free to comment below or join our Discord community