Posthuman AI: Ocean DAO R7 Proposal

  • Name of project: Posthuman AI v1.2

  • Project Website:

Github :

Twitter :

  • Proposal Wallet Address:


  • The proposal in one sentence:

This proposal seeks funding for our continued development of Posthuman v1.2 and the publication of 5 high-value AI models in collaboration with LUCI and other AI companies. Posthuman allows training and inference on advanced NLP models without exposing model parameters (i.e. ZK training and inference), using Compute-to-Data.

  • Which category best describes your project? Pick one or more.

  • Build / improve applications or integrations to Ocean

Grant Amount Requested: 32000 OCEAN

Summary of Progress - v1.2

Since the last DAO round, we’ve made major upgrades to our codebase, focusing on developing commercially useful variants of the models (DistilBERT and DistilGPT2) and algorithms that we shared earlier (with v1), both to accrue value for Ocean holders and to enable corporate AI use-cases directly from Ocean Market.

With Posthuman v1.2 we’ve developed and introduced an entirely new library of algorithms, under the title “QA-Commercial”. It is designed to interact with proprietary AI models (developed by Posthuman, LUCI, or other parties) to enable AI question-answering across thousands of corporate documents in a matter of seconds. Some of its key features:

  1. Integrates with the DrQA library for n-gram and TF-IDF shortlisting of paragraphs, which are then fed into a pipeline with a custom-trained DistilBERT model for question answering over thousands or even millions of pages of documents (rather than the single paragraph supported by the Posthuman v1 models).

  2. Provides enterprise-grade server capabilities for continuous question answering for up to 24 hours at a time. This includes not just a Django server but also Celery + Redis worker allocation for load balancing, enabling rapid, parallel processing of multiple queries at the same time without errors.
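
As a rough illustration of the shortlisting step, the sketch below implements a minimal TF-IDF ranker in plain Python. It is a simplified stand-in for DrQA’s retriever (no n-grams, no stemming), intended only to show how candidate paragraphs are narrowed down before the DistilBERT model reads them; the function name and tokenisation are our own.

```python
import math
from collections import Counter

def tfidf_shortlist(question, paragraphs, top_k=3):
    """Rank paragraphs by TF-IDF overlap with the question and
    return the top_k candidates for the QA model to read."""
    docs = [p.lower().split() for p in paragraphs]
    n = len(docs)
    # Document frequency: how many paragraphs contain each term.
    df = Counter()
    for doc in docs:
        df.update(set(doc))

    def score(doc):
        tf = Counter(doc)
        # Sum tf * idf over the question terms present in the paragraph.
        return sum(
            (tf[t] / len(doc)) * math.log(n / df[t])
            for t in question.lower().split() if t in tf
        )

    ranked = sorted(paragraphs, key=lambda p: score(p.lower().split()), reverse=True)
    return ranked[:top_k]
```

The shortlisted paragraphs would then be passed one by one to the DistilBERT QA model, which extracts the answer span.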

In addition to QA-Commercial algorithms, we’ve also developed the following:

  1. The base DistilBERT model is trained for question answering only on Wikipedia. We’ve produced two models by fine-tuning it on:

A. Proprietary corporate data, including commercial, email, and customer-service documentation, for a commercially valuable version of the model (LUCI). Combined with the QA-Commercial algorithms, this enables large-scale question-answering on corporate documentation with unprecedented accuracy.

B. US legal data - this allows the model to read US law (including federal and state statutes and judgments) and answer specific questions about it.

The US Legal Model will bring to Ocean Market the AI functionality depicted in the video below:

[We will not be sharing the model parameters here; instead, we will post them directly on Ocean Market (Polygon) once the bugs mentioned in the progress report are resolved - allowing this model to potentially accrue hundreds of thousands of dollars in value.]

  2. We plan on initiating discussions with PacerPro, Datanomy and DeepC, to hopefully onboard them as users and/or providers of AI models on Posthuman Market. More collaborations to come.

This is a small summary of the many functionalities we’ve added; please see the progress review doc for more information, including links to specific files -
You can also view all the code from “QA-Commercial” here:

Next Steps - v1.2

Our progress on this milestone can be viewed in the codebase as well as the progress review doc. We are facing a few technical bugs with publishing algorithms on the Ocean Market on Polygon, due to which we haven’t yet been able to post our proprietary models (LUCI etc.). All preparation is in place, and we will continue to work with the Ocean team to have them published as soon as possible.

Next, we plan to utilise 5000 OCEAN from this round’s grant to host bounties for people developing end-user apps using the LUCI NLP model provided by Posthuman AI on the Ocean Market (Polygon). This will incentivise users to find specific applications of our versatile AI model and perhaps attract many more end users.

Thirdly, we aim to step up our focus on collaborations and outreach with this grant - we hope to get large companies, including those mentioned above, to start utilising Posthuman AI as a viable replacement for either in-house AI development or the API-based AI access models prevailing today. Our target is >5 companies/startups using AI models made available on Ocean via Posthuman, in the next 60 days.

We have updated the ROI calculation below to reflect our updated business plan in the short run.

Summary of Progress - v1

Thanks to the funding provided by the Ocean Community, Posthuman v1 Models are now live on the mainnet!

We’ve published a pretrained DistilBERT question-answering model and a pretrained DistilGPT2 model as assets on the mainnet, along with a template inference script for each. Users can edit these algorithms and get AI completions/answers on any text, all within Ocean Protocol’s C2D ecosystem, with its accompanying guarantees of verifiability and privacy.

This is the first time an AI model has been made available in a verifiable state using blockchain technology.

You can test the models for yourself! Below is a Video walkthrough for using the DistilBERT-QA model on the ocean market (NEW):

To test this with your own question/context pair, please edit the template QA algorithm, replacing the “Text” and “q1” variables with your own arguments. Contact us on Discord to have your algorithm approved quickly. We will shortly migrate to Polygon to reduce the fee burden of this step.
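
For readers who want a feel for the template’s structure before purchasing the asset, here is a hypothetical sketch. The variable names Text and q1 match the template; extract_answer is a naive stand-in we wrote so the sketch runs on its own - the real algorithm performs DistilBERT span extraction inside C2D instead.

```python
# Illustrative sketch of the template QA algorithm's shape.
# Edit Text and q1 with your own context and question.

Text = "Ocean Protocol enables compute-to-data. Model parameters never leave the marketplace."
q1 = "What does Ocean Protocol enable?"

def extract_answer(context, question):
    # Naive stand-in: return the context sentence sharing the most
    # words with the question (DistilBERT extracts an exact span instead).
    q_words = set(question.lower().strip("?").split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

print(extract_answer(Text, q1))
```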

Further details, including upcoming models, links to updated code, and a walkthrough of the additional functionality, can be found in our UPDATED Progress Tracking Document.

  • Posthuman x LUCI collaboration: We’re collaborating with LUCI, an enterprise question-answering AI software provider. LUCI has agreed to make their proprietary AI models available as assets on Posthuman. This will be the start of selling AI models with high commercial value on the Posthuman Market.

  • About LUCI
    LUCI’s proprietary AI models are trained on over 1,000,000 QA pairs from corporate documentation, memos, legalese and technical papers. They allow information retrieval with 71% F1 on the OpenSQUAD dataset (comparison - google custom search = 32%, leading published models = 58%). These models will be made available exclusively via Ocean protocol. [More -]

  • Collaboration with other AI providers: we are in active conversation with multiple large AI-as-a-service companies, and are working to help them monetize with Posthuman. We hope to introduce 1-2 additional high value commercial AI models, in addition to LUCI.

Project Overview

  • Description of the project:

Large transformer models have major commercial applications in audio, video, and text-based AI. Due to the high cost of training and inference, it is not possible for most developers to utilise their own models, and thus they rely on centralised API access, which can be revoked at any time and comes with pricing and privacy concerns.

Posthuman tackles the following Problems:

  1. Ownership of Model Parameters of large transformer models is a crucial issue for developers that build on these models. If the API access can be unilaterally revoked or repriced, it makes for a very weak foundation for AI-based businesses.

  2. Next, there is a question of verifiability of claimed loss scores: it is nearly impossible to verify if a particular centralised API request was actually served by the model promised, and not a smaller, cheaper model.

  3. Further, private ownership of models gives rise to a culture of closed competition rather than open collaboration: every improvement on the model requires express permission, and the use of a model so improved is also entirely permissioned.

The Solution:

Posthuman is a marketplace based on Ocean Protocol that allows users to buy compute services on large NLP models. Model providers contribute funds to train useful models, and model consumers purchase inference and evaluation on the models they find most useful. With Posthuman v0.2, users can now train, infer, and evaluate on any arbitrary text data.

Posthuman’s decentralised architecture achieves three goals that are impossible with centralised AI providers:

  • Verifiable Training and Inference: The end user can know for sure which model served a particular inference request

  • Zero-Knowledge training & ownership: The marketplace controls the models, ensuring each person who contributed to training is rewarded fairly, as all value created by these models remains on-chain and cannot be ‘leaked’.

  • Censorship-Resistant Access: Access to AI is fast becoming a basic necessity for a productive life, yet such access can easily be censored by centralised providers. With a decentralised alternative, any holder of crypto is guaranteed to be treated equally by the protocol.
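
One way to picture the verifiability goal is a hash commitment over the model parameters: the digest published at listing time lets a consumer check which model served a request. The sketch below is our own illustration of that idea in plain Python, not Posthuman’s actual on-chain mechanism.

```python
import hashlib
import json

def model_fingerprint(params: dict) -> str:
    """Hash a model's parameters deterministically. Publishing this
    digest commits the provider to one exact model."""
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# The digest committed when the model was published...
published = model_fingerprint({"layer1": [0.12, -0.5], "bias": [0.01]})
# ...can be compared with the digest attached to an inference response.
served = model_fingerprint({"layer1": [0.12, -0.5], "bias": [0.01]})
assert served == published  # the promised model answered the request
```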

Value for Ocean

Ocean Protocol will form the backbone of zero-knowledge model publication on the Posthuman Marketplace. Additionally, all inference requests will run on the Ocean network: due to the decentralised and zero-knowledge nature of the models, it will not be possible for an individual to run inference on a published model outside of the Ocean ecosystem.

Ocean’s Value for Project

Ocean Protocol provides a market for dataset assets, compute and algorithms. Specifically, data-providers can expose their data for ‘compute-eyes only’, ensuring no data leaks. Here we apply this principle to share trained parameter values to further ‘compute eyes’ only for inference and fine-tuning, preserving the secrecy of model parameters and allowing repeated rewards to flow to those who participated in training it.
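
The “compute-eyes only” principle can be sketched in miniature: the marketplace holds the weights, and only computation results leave. This toy class is purely illustrative - Python name mangling provides no real secrecy; the actual guarantee comes from Ocean’s C2D sandbox, not language-level privacy.

```python
class ComputeOnlyModel:
    """Toy sketch of 'compute-eyes only' access: callers can buy
    inference but are never handed the parameters themselves."""

    def __init__(self, weights):
        self.__weights = weights  # held inside; not part of the public API

    def infer(self, x):
        # Only the *result* of the computation is returned.
        return sum(w * v for w, v in zip(self.__weights, x))

model = ComputeOnlyModel([0.5, 1.5])
print(model.infer([2, 4]))  # consumers see outputs only
```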

  • What is the final product (e.g. App, URL, Medium, etc)?

Posthuman Market will be a webapp that serves various AI models as C2D data. Posthuman v1 will include NLP models, including all state-of-the-art transformer models developed since the advent of BERT.

Posthuman tools will also be accessible via API, enabling app developers to directly integrate Posthuman inference in their AI applications.

  • How does this project drive value to the Ocean ecosystem? This is best expressed as Expected ROI, details here.

new - v1.2 calculation

After successful publication on the mainnet, we’re in a position to make a more practical and short run evaluation of ROI, in addition to the broad calculations presented before.

We already have 1 proprietary AI model ready for publication on Ocean, and at least 5 more such models in the pipeline, tailored to various enterprise use cases (i.e. finding information in text, accounting documentation, healthcare documentation, images, customer support bots etc.).

We confidently expect each such model to create at least $250,000 - $1,000,000 in additional OCEAN datatoken consumption volume over the next year. Using these estimates, we arrive at the following conservative calculation for the next year:

Total Funding (including this round): $50000 + future rounds
Expected increase in OCEAN Network demand over next 1 year, from 5 high value AI models: $1,250,000-$5,000,000
Expected chance of success: 90% (as our v1 prototype was successful, we have upgraded our confidence in our probability of success)
Expected ROI : 22.5 - 90
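
The v1.2 figures above work out as follows:

```python
# Reproducing the v1.2 ROI range from the inputs stated above.
funding = 50_000                                  # total funding in USD
demand_low, demand_high = 1_250_000, 5_000_000    # 5 models x $250k-$1M
p_success = 0.90

roi_low = demand_low * p_success / funding        # lower bound: 22.5
roi_high = demand_high * p_success / funding      # upper bound: 90.0
```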

original calculation
NLP/transformer models have been in extremely high demand since their ‘ImageNet moment’ in December 2019, when models of the BERT family exceeded human performance in reading comprehension (the SQuAD and GLUE benchmarks). The market for transformer models is estimated at $30 billion in 2021, but it is a highly illiquid market: models cannot be traded without fear of leakage, and inference cannot be offered because model integrity cannot be verified. In short, if trust existed between model provider and consumer, this market could become highly liquid - such as on Posthuman. This means a large percentage of AI models may soon trade on Posthuman as a way to maximize profits.

Posthuman takes a marketplace cut of 25%, as Posthuman also provides the hardware (GPUs) on which the AI models run. In addition, Ocean Protocol receives 0.25% of every datatoken transaction.

Assuming even 10% of NLP models are traded on Posthuman by the end of Y1:

  • that would account for over $3 Billion in increased demand for $OCEAN. [OCEAN Datatoken Consuming Volume]
  • and over $7.5 million in direct revenue to OCEAN. [Network Revenue]

The Bang/Buck Calculations thus work out as:

Bang = $7.5 million

Buck = $35,000 (+Future rounds)

Probability of success = 80%

Expected ROI = 171.43
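
And the Bang/Buck arithmetic, for the record:

```python
# Reproducing the original Bang/Buck ROI estimate.
bang = 7_500_000   # projected direct revenue to OCEAN (USD)
buck = 35_000      # funding to date (USD)
p_success = 0.80

expected_roi = bang / buck * p_success  # roughly 171.4
```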

Project Deliverables

IF: Build / improve applications or integrations to Ocean, then:

Prior work:

We’ve already deployed v0.1 [single model fine-tuning] and v0.2 [multi-model and multi-dataset fine-tuning] of Posthuman on the Rinkeby testnet.

You can find an overview of our progress so far, including a deep-dive into the functionality, here:

Briefly, in v0.2, we’ve created a prototype allowing inference from any arbitrary NLP model in a verifiable, zero-knowledge state. We provide example scripts for various functionalities, including zero-knowledge ownership, and federated training.

While functional, the codebase has a few bugs and is not yet ready for production - especially with regard to scaling the marketplace’s Kubernetes architecture to handle multiple concurrent training/inference requests. The code sometimes hits OOM errors in the backend, which we suspect is due to insufficient hardware (we’re using minikube on a single V100 GPU) and a lack of load balancing. We plan to address this in v0.3.

Secondly, we plan to integrate Filecoin storage and Secret Network into our v0.2 prototype to add additional privacy and robustness. This will be completed over the next 2 months, including integration and testing cycles.

Third, we’ve also begun developing a custom UI for AI inference, to integrate with Posthuman market.

Finally, we’re working on API documentation to make it easy for developers to integrate AI models from Posthuman into their applications.


The central goals for this grant, deliverable over the next 30-45 days, are:

  • Engaging with AI partner LUCI to bring their enterprise AI to Posthuman

  • Engaging AI companies by offering them a monetization opportunity; onboarding the next ~5 corporate customers.

  • Developing Posthuman Market deployment for allowing a larger variety of models

These form a part of our larger plan:

  • 20th April: Production-ready Rinkeby deployment, capable of handling concurrent requests efficiently - invitation for community testing. [Posthuman v0.3]

  • 30th April : Partner with various AI companies for demo, onboard for launch, give advanced credits.

  • May : Publication on mainnet after fixing any bugs/errors. Includes API documentation for publishing, training, and using AI models. [Posthuman v1]

  • June : Develop commercially useful models on mainnet [Posthuman v1.2]

  • July: Bounties, hackathons, +5 commercially useful models, +5 corporate clients

Future Plans:

After a successful deployment of the NLP/Transformer based AI marketplace, we plan to expand to include image, speech, and large-scale reinforcement models (like for car driving), all available in the verifiable, zero-knowledge setting pioneered by Posthuman v1.

Project Details

If the project includes software:

  • Are there any mockups or designs to date?

Yes. Besides the Github, we have code overviews outlining the functionality in v0.2, and a short video demonstrating the functionality from the command line on Rinkeby:

Demo showing how a user can:

  • Monetise a trained NLP model with data tokens

  • Consume custom trained model inference endpoint without accessing model

  • Steps for repeating consumption flow for custom evaluation/training.

Deployed on Rinkeby testnet.

  • An overview of the technology stack?

AI Tech Stack

We experimented with many libraries for training large transformers, including DeepSpeed2 for very large models, and Reformer for large context sizes. In the end, we decided to utilise the huggingface-transformers library as it is the most versatile, offering hundreds of different kinds of transformer architectures under one library.

In particular, we’re likely to expand to include DeepSpeed at some point, as it allows training models with up to a trillion parameters, or 5 times larger than GPT-3. It would serve as a perfect test case of collaborative training; however, it requires ~100+ GPUs, and is on our expansion roadmap.

We have currently tested v0.2 on one V100 GPU. One of the reasons for this proposal is expanding this to a production-ready 8 GPU kubernetes cluster.

Our initial tests were performed using GPT-2, i.e. using the transformers.GPT2LMHeadModel class. Note that the ‘LMHead’ extension allows evaluation by computing loss scores when labels are provided.
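
For readers unfamiliar with the LMHead’s evaluation role, the sketch below computes the same cross-entropy quantity by hand on toy logits. It is a numeric illustration of the loss the head returns when labels are supplied, not the transformers API itself.

```python
import math

def lm_loss(logits, labels):
    """Cross-entropy loss an LM head computes when labels are given:
    average negative log-probability of each correct next token."""
    total = 0.0
    for step_logits, label in zip(logits, labels):
        z = sum(math.exp(l) for l in step_logits)          # softmax denominator
        total += -math.log(math.exp(step_logits[label]) / z)
    return total / len(labels)

# Two timesteps over a toy 3-token vocabulary.
logits = [[2.0, 0.5, 0.1], [0.2, 1.8, 0.3]]
labels = [0, 1]  # correct next-token ids
print(lm_loss(logits, labels))
```

The better the model scores the correct token, the lower this value, which is why the loss doubles as a verifiable evaluation metric.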

Ocean Tech Stack

Trainers publish trained model parameters as an asset on Ocean, handled by the Posthuman protocol. The model is stored as a data asset on the marketplace’s hardware and allows training and inference compute calls only. In this way, the actual parameters of the model remain secret, and the datatoken remains the sole way to access that model (even for its creator). This eliminates any possible leakage of model parameters off-chain, preserving their on-chain value.

Team members

Dhruv Mehrotra

Hetal Kenaudekar

Additional Information

Our startup has also received ~$25,000 in Angel funding, from the founders of two $10M+ companies.

Project Links:


Progress report with code overview:

Current (WIP) v1.0 Codebase -

External Links:

Unfortunately no collaboration was possible: despite multiple attempts to get in contact, it did not happen, and the appointment chosen by Posthuman AI was neither attended by them nor followed up on. This does not look like a serious project to me, so I will vote NO on this proposal.


That really does not sound too good. :frowning:

Hi Robin,
I’m sorry we were unable to collaborate; I did try to seek an appointment but received no response/confirmation from you. Perhaps there was some miscommunication.

We have been regularly updating our progress and responding to any queries raised on discord. Please feel free to express any questions you have about the project. We have worked hard on this project for over 5 months and are excited to bring valuable models to Ocean.

Please base your votes on our value added, not solely based on success of potential collaborations.
I hope to count on the continued support of the community as we strive to bring models that we’ve invested over $100k to produce for exclusive sale on Ocean Market.

Once again, we’re proud to have completed deliverables to the best of our ability and request the community to address any queries to us before voting NO when we are so close to publication of commercial models after months of development and testing.