Project: Themistoklis | Python to ocean 3d world | Round 20

Project Name

Project: Themistoklis


Project Category

Build & Integrate


Proposal Earmark

2nd/3rd Grant


Proposal Description

Creation of a 3D engine based on Panda3D in Python, with connections to Ocean Marketplace for synchronization between clients

https://www.panda3d.org/

This way, users will be able to sync scene data between clients in real time using Ocean Marketplace inside the engine, so all users share the same information and map data without needing a real-time map-data stream between them.


Grant Deliverables

  • A Panda3D system that can easily load models and cameras, running headless or headful
  • Panda3D examples
  • Direct connection between Panda3D clients using the TCP protocol
  • Automatic sync between clients over those connections, restricted to a pre-selected whitelist
  • Sync of Ocean Marketplace assets between clients
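The whitelist-gated sync deliverable can be sketched with nothing but the standard library: one client pushes a JSON-encoded scene update over TCP, and the receiver applies it only if the peer is on the pre-selected whitelist. All names here (`serve_once`, `push_update`, `WHITELIST`, the update schema) are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch: whitelist-gated TCP sync between two clients (stdlib only).
import json
import socket
import threading

WHITELIST = {"127.0.0.1"}  # pre-selected peers allowed to sync


def serve_once(sock, state):
    """Accept one connection; apply the peer's scene update if whitelisted."""
    conn, (host, _port) = sock.accept()
    with conn:
        if host in WHITELIST:
            chunks = []
            while chunk := conn.recv(4096):
                chunks.append(chunk)
            state.update(json.loads(b"".join(chunks).decode()))


def push_update(addr, update):
    """Send one JSON-encoded scene update to a peer, then close."""
    with socket.create_connection(addr) as conn:
        conn.sendall(json.dumps(update).encode())


state = {}
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv, state))
t.start()
push_update(srv.getsockname(), {"model": "tree.egg", "pos": [1, 2, 0]})
t.join()
srv.close()
print(state["model"])  # tree.egg
```

In a real Panda3D client the received update would be applied to the scene graph (e.g. reposition a loaded node) instead of a plain dict.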

Project Description

What is Project Themistoklis?

It is best to illustrate our project through an example. Imagine there is a forest farm of orange trees and you want to count the number of trees in the forest.

Source: Apple Orchard Early Morning Aerial by jimmadsen | VideoHive

Source: Orange trees detection with YOLO v5 in UAV Imagery | by Joao Otavio Nascimento Firigato | Medium

Figure 1: A Farm.

How would you do it?

There could be many ways, but this is how Themistoklis intends to tackle it.

Firstly, fly a drone using our software stack to map the forest. As the forest is being mapped, data about the world is sent to a World Server that directs the data to its destination; in our case, the data reaches an object detection server. We will host many object detection models on that server and use the one that detects and counts the orange trees on the farm. Once counting is done, the count and the related data will be shared with us through an access-control protocol like Ocean Protocol.
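The routing step in that pipeline can be sketched as a small dispatcher: the World Server looks up the model registered for the requested task and hands it the mapped frames. The function and task names here (`world_server_route`, `count_orange_trees`) are hypothetical stand-ins for the real services.

```python
# Illustrative sketch of the described pipeline: the World Server routes
# incoming drone data to the model registered for the requested task.

def count_orange_trees(frames):
    # Stand-in for the real detector: just sums per-frame counts.
    return sum(frame.get("orange_trees", 0) for frame in frames)

MODELS = {"orange_tree_count": count_orange_trees}

def world_server_route(task, frames):
    """Direct mapped-world data to the model registered for the task."""
    return MODELS[task](frames)

frames = [{"orange_trees": 3}, {"orange_trees": 5}]
result = world_server_route("orange_tree_count", frames)
print(result)  # 8
```

New tasks (fire detection, for instance) would simply register another entry in the model table.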

Our hardware stack

  • We are developing drones for our project, fitted with different sensors
  • These include cameras, night-vision cameras, and other sensors

Our software stack

  • Desktop/iOS client
      ◦ Drives a drone or a group of drones, including via NLP
      ◦ Directs data from the drones to the World Server
  • World Server
      ◦ Receives data from the client
      ◦ Directs data to different servers as required by the project
  • Object detection models
      ◦ Customised object detection models, such as fire detection and orange tree detection

In short, Themistoklis aims to make it easy to capture real-world data with drones and to offer off-the-shelf models that derive insights from the captured data. It also supports generating 3D models of the real world, which can be used in VR game development. This real-world data can be monetized through Ocean and other Web3 protocols. This is articulated in Figure 3.

Another example is fire detection. The drones will be able to detect early fires while patrolling, sending a signal if they spot one, in order to avoid the devastating effects of forest fires.

Figure 2: Fire detector on forest fire image.

Figure 3: Articulates what Themistoklis is.

Figure 4: Themistoklis development stack

Project Themistoklis is a project by artificialLeap. artificialLeap is developing both hardware and software necessary to make it easy for us and anyone to capture data from the real world and monetize it.

Figure 5: How Themistoklis’s different stacks interact


Final Product

Features:

  • Simple-to-use Android, iOS & desktop (Windows, Linux, macOS) app to connect to remote drones (front end - client)
  • Middle Server that handles the world building (Unity or Node.js): a headless app that connects with Thoth (the AI) and with the client, generates the world around the drones, and feeds world-level information to the AI
  • Everything will communicate over TCP sockets, for ease of use and faster data transfer
  • The AI will generate commands based on input from the drones and the Middle Server, fetching the objects around each drone and real-image visuals with detected objects (which will also be spawned in the World Server)
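When several components share one TCP socket, the messages need framing so receivers know where one ends and the next begins. A common approach, shown here as a hedged sketch (the wire format is an assumption, not the project's actual protocol), is a 4-byte big-endian length header before each JSON payload:

```python
# Sketch: length-prefixed JSON framing for the TCP links described above.
import json
import struct

def frame(msg: dict) -> bytes:
    """Prefix a JSON-encoded message with its 4-byte length."""
    payload = json.dumps(msg).encode()
    return struct.pack(">I", len(payload)) + payload

def unframe(buf: bytes):
    """Yield decoded messages from a buffer of framed messages."""
    while buf:
        (length,) = struct.unpack_from(">I", buf)
        payload, buf = buf[4:4 + length], buf[4 + length:]
        yield json.loads(payload.decode())

stream = frame({"cmd": "move", "dist": 5}) + frame({"cmd": "strafe", "dist": 2})
msgs = list(unframe(stream))
print(msgs[1]["cmd"])  # strafe
```

The length prefix keeps parsing trivial on both the Middle Server and the client, at the cost of four bytes per message.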

Architecture:

The Project will consist of 3 Apps:

  • The Client: accessible to users, who will use it to update data and drones; it also holds the connections to the drones and will run on Windows, Linux, macOS, iOS, or Android
  • The Middle Server/World Server: built with Unity or Node.js, it creates a 3D world containing the map and the drones; a headless server (headful also supported) that will run on Linux, macOS, or Windows
  • The backend server, Thoth: handles the AI directly, connects to the World Server (TCP) and to the Client (web requests), does all the heavy lifting, and generates the output for each command. Thoth also includes a web editor (node graph) to easily create AIs

Tech Stack:

Front End Application:

  • The client will be the API connector to the drones and will be able to maintain multiple connections at once
  • It will get information from the drones (camera, altitude, etc.)
  • It will send the generated commands (move towards, strafe, etc.) to the drone(s)
  • It will save/load drone formations, default speeds, max altitude, etc.
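The save/load feature can be sketched as round-tripping a formation through JSON. The `Formation` fields below (default speed, max altitude, per-drone offsets) are assumptions for illustration, not the project's actual schema:

```python
# Hedged sketch: saving/loading a drone formation as JSON.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Formation:
    name: str
    default_speed: float              # m/s (assumed unit)
    max_altitude: float               # metres (assumed unit)
    offsets: list = field(default_factory=list)  # per-drone (x, y) offsets

def save_formation(f: Formation) -> str:
    return json.dumps(asdict(f))

def load_formation(raw: str) -> Formation:
    return Formation(**json.loads(raw))

f = Formation("wedge", 4.0, 120.0, [[0, 0], [-2, -2], [2, -2]])
restored = load_formation(save_formation(f))
print(restored.name)  # wedge
```

The same serialized form could later back the "mint plans for drones" Web3 feature, since it is already a portable JSON document.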

Middle Server - World Server:

  • The backend will generate a small-scale 3D world around each drone containing the information visible to its camera, working like a small rendering game camera
  • The world will include all the drones connected to it
  • Each AI in the world will have a unique ID, based on its client's ID
  • The World Server will be able to generate 3D maps of the real world and export them easily as 3D models
  • All World Servers will sync at an interval through the blockchain, so all drones share the same world map
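One cheap way to implement interval sync, sketched here as a stand-in for the proposed blockchain-backed mechanism (the digest scheme and "remote wins" rule are assumptions), is to compare map digests and pull the full map only when they differ:

```python
# Sketch: interval-based world sync via map digests (illustrative only).
import hashlib
import json

def map_digest(world_map: dict) -> str:
    """Stable digest of a world map, independent of key order."""
    canonical = json.dumps(world_map, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def sync(local_map: dict, remote_map: dict) -> dict:
    """Pull the remote map only if its digest differs from ours."""
    if map_digest(local_map) != map_digest(remote_map):
        local_map = dict(remote_map)   # naive policy: remote wins
    return local_map

a = {"drone-1": [0, 0, 10]}
b = {"drone-1": [0, 0, 10], "drone-2": [5, 5, 12]}
merged = sync(a, b)
print(len(merged))  # 2
```

Publishing only the digest on-chain at each interval would keep on-chain data small, while the full map travels over the existing TCP links.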

Back End Application - Thoth:

  • Thoth will include all the models and AI data needed to generate the commands
  • It includes a web editor (node graph, like UE4's) that lets the user easily create graphs for the AI behavior

Models Needed:

  • Computer Vision: Visual Object Detector - a Python algorithm using ImageAI/TensorFlow to detect objects in images, with scripts for various inputs (screenshots, images, videos, game windows)
  • Decision Making: will use NLP, probably OpenAI's GPT-3 or GPT-J (which is open source and can be self-hosted). This will create a word-based world that the AI will use, sending word commands for movement
  • Maybe more?
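The "word commands" idea implies a parsing step between the language model's free-form output and the drone's control layer. A minimal sketch, assuming a simple command grammar of our own invention ("move forward 5", "strafe left 2"):

```python
# Sketch: parse NLP model output into structured movement commands.
# The grammar below is a hypothetical example, not the project's spec.
import re

COMMAND_RE = re.compile(r"(move|strafe)\s+(forward|back|left|right)\s+(\d+)")

def parse_command(text: str):
    """Turn one line of model output into a (verb, direction, metres) tuple."""
    m = COMMAND_RE.search(text.lower())
    if not m:
        return None
    verb, direction, dist = m.groups()
    return verb, direction, int(dist)

cmd = parse_command("Move forward 5 to reach the next row of trees")
print(cmd)  # ('move', 'forward', 5)
```

Constraining the model to such a grammar (or rejecting output that does not match it) is one way to keep a generative model like GPT-J from issuing unsafe or unparseable drone commands.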

Drone Prototype:

  • There will be a prototype drone fitted with the extra sensors needed to make the best use of the AIs (although the system will also work with other types of drones)
  • It will include all the basic sensors, plus a thermal camera and a night-vision camera for better results at night

Web3 Integration - Blockchain:

  • Save/load and mint plans for drones (formations, good speeds, etc.)
  • Static website on the blockchain: manager for the World Server/Drone Manager
  • Login system through the blockchain
  • Pub-sub system using blockchain canisters?
  • Implement blockchain support in the Unity client for login and save/load of data

Tests:


Value Add Criteria

  • Custom game engine built on Panda3D engine
  • Multiplayer option for Panda3D
  • Panda3D Ocean Marketplace support



Core Team

Alexandros Titonis

Role: AI Engineer & Full Stack Developer

GitHub: alextitonis

LinkedIn: https://www.linkedin.com/in/alexandros-titonis-0ba061176/

Experience:

I’ve been working lately on Thoth, a node-based web editor for AI, and I’ve also worked on webaverse and xr-engine.

In previous years I focused on multiplayer games in Unity.


Funding Requested
15000


Minimum Funding Requested
6450


Wallet Address
0xd5bDc028aaEAb8D8fa1C5Bb577b6C4C3402559F6


It took me a little bit of digging, but I eventually found the project website, github, and twitter handle. I’m posting them here for reference. @alextitonis I think future proposals should include this information in the project description.


Ah sorry my bad, I’ll add them


I appreciate that the code for this project is open source. I reviewed past deliverables and am excited to see the Ocean-Sync repo evolving over time. I voted YES for this project in R20.

The proposal talks about publishing datasets to Ocean Protocol so that distinct drone operators can share their raw drone data such that a World Server can download it, and stitch it together with data from other drone operators. After that point, the unified dataset could be labeled by models running on the Computer Vision Server.

Question: Will the unified dataset built by the World Server or the labeled dataset built by the Computer Vision Server be published to Ocean Protocol?

Additional Comments:
In future rounds, I’d love to see a deeper analysis of the opportunity size and the type of customer that would likely consume the unified or labeled datasets. I’d also like to better understand the mechanism design to incentivize the system participants, especially the drone operators.

I’m also curious about the feasibility of using NLP models like GPT-3 or GPT-J to actually command a drone.


Hey, thank you for the support!
We will give the option to publish both, based on what the user prefers.

For NLP, we will probably use GPT-J, but we need to do more research before making the final decision.


Great project, great convo, thx alex!

David! Steer drones with GPT-3 NLP? Hells yeah! Thinking: “DUMP IT” :slight_smile: :heart:


Project submitted deliverables:

Repositories:

Both repositories explain how to run them in their READMEs.

Some screenshots

Admin:

Moving to accept