In September 2017, my team at TheoremOne embarked on a research project on decentralized applications. We set out to build a decentralized clone of Vine (the short-form video sharing app) using technologies like IPFS and Yjs. We started from scratch, coding small features one at a time, refactoring as the new concepts settled in our minds, and coming up with new concepts of our own.
You can read more on our mission to evaluate the maturity of IPFS’s network, how we weighed strategies for building decentralized apps and the problems we ran into here: The state of Frontend development with IPFS in 2017.
As we worked on this experiment, we came up with a large number of ideas for decentralized apps that we don’t have time to build ourselves, but would love to see other people build. This post lists 16 of them.
If you are not already familiar with building Dapps, we recommend you start by reading our post on Decentralized Apps Key Concepts, which will provide an overview of what you need to know to understand all the technical aspects of the ideas below.
Decentralized file collaboration and file sharing app with offline functionality.
Bring in the best features of file sharing and collaboration tools out there and mix them with distributed technologies while working directly from the user’s file system:
- Select folders to upload and set access rights to them and/or to individual files.
- Allow real-time collaboration on docs.
- Include new files simply by copying them to one of the selected folders.
- Work offline and merge the updates when reconnected.
- Access files on demand. The network keeps files available even if the author’s computer is turned off.
- Keep files under version control, allowing users to merge updates or branch out.
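As a rough sketch of the offline-merge step above: assume each replica keeps a map of file paths to `{ content, updatedAt }` entries (our own simplified model; a real CRDT library like Yjs merges far more finely than this last-writer-wins rule):

```javascript
// Sketch: merging a reconnecting replica's offline edits into the local
// state. Each entry records when the file was last written; on conflict
// we keep the most recent write (last-writer-wins, a deliberate
// simplification of what a CRDT would do).
function mergeReplicas(local, remote) {
  const merged = { ...local };
  for (const [path, entry] of Object.entries(remote)) {
    if (!merged[path] || entry.updatedAt > merged[path].updatedAt) {
      merged[path] = entry;
    }
  }
  return merged;
}
```

Running the merge in both directions on two replicas yields the same state on each, which is what lets nodes reconcile after working offline.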
Distributed password manager.
An encrypted distributed store of passwords. Available any time—high availability even while the user’s computer is turned off. Interconnected nodes would help maintain an always-accessible copy of the encrypted data—which can only be accessed by the user holding the master password.
While connected, you would help replicate and distribute other users’ encrypted data. IPFS offers tools to store data in a decentralized network. A user that joins the network would receive the swarm’s data state—similar to a hash table referencing users’ data—and could then submit their own encrypted data and add it to the reference table.
When joining from another device, the user could locate their data through the hash table. Since the data is encrypted, only the original uploader, who holds the key, can decrypt it.
Where would the data be physically stored? Every node connected to the network would store and redistribute every piece of information. Only the original uploader holds the key to decrypt their own data, so only they can access it.
Decentralized Wikipedia.
Every now and then Wikipedia displays banners asking its users for support: donations to help cover server and bandwidth costs. With a decentralized infrastructure, every Wikipedia user could help out without donating money, by sharing computational resources instead. By replicating and redistributing content, any user accessing it can take part in a decentralized swarm that helps keep server costs low and uptime high.
Simply by accessing content, nodes can store the downloaded data, thus replicating content and helping distribute it to others.
Distributed package management.
One person famously broke the Internet, and it happened again recently: some packages disappeared from npm. With the help of a decentralized network of interconnected nodes we could avoid this type of doomsday event: high availability would ensure packages could not be removed, even if the author wanted to remove them.
Picture a scenario where two people, Alice and Bob, run nodes connected to this network. Alice requests a package and Bob’s computer receives the request. Since the package has not been built yet, Bob’s computer fetches the required files from the corresponding git repository. Once the repo has been cloned, Bob’s computer runs the build script on the files and then uploads the resulting built files to the network via IPFS. Bob’s computer receives the hash corresponding to the newly built files and sends it back to Alice. Alice now has access to the required package; her computer downloads the files and helps redistribute them.
Charlie comes along, joins the network and requests the same package Alice requested. Either node (Alice’s or Bob’s) receives the request; since the package has already been built, Charlie receives the corresponding hash, downloads the files and helps redistribute them too.
This distributed package management idea wouldn’t necessarily be a replacement for things like npm, but instead it could enhance npm by adding millions of nodes in addition to their servers, all collaborating in a distributed network.
Distributed computing network + command center.
The previous ideas explored the concept of shared resources: keeping data highly available by sharing disk space and bandwidth. Beyond sharing data, we also wanted to explore sharing another resource: computation. As in the previous ideas, the interconnected swarm of computers can share data among all of its nodes and, with the help of CRDTs, keep this data normalized. But what if the data store held commands or operations to be executed?
The swarm could effectively respond to events in real time and execute heavy workloads in a distributed manner, delegating operations among the interconnected nodes.
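One way to picture that shared data store of operations: a replicated log where each node claims the next unexecuted command. A minimal in-memory sketch (the claim step is local here; in a real swarm it would itself be a CRDT update so nodes don’t double-claim):

```javascript
// Sketch: a shared command queue for the swarm. submit() appends work,
// claimNext() lets a node take ownership of the next pending command.
function createCommandStore() {
  const ops = [];
  return {
    submit(command) { ops.push({ command, doneBy: null }); },
    claimNext(nodeId) {
      const op = ops.find((o) => o.doneBy === null);
      if (!op) return null;
      op.doneBy = nodeId; // in a real swarm this claim would replicate as a CRDT update
      return op.command;
    },
    pending() { return ops.filter((o) => o.doneBy === null).length; },
  };
}
```

Each connected node polls (or reacts to replication events) and drains the queue in parallel with its peers.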
Distributed continuous integration.
A specific implementation of the previous idea, born from a real-world problem: testing environments that don’t scale as the team and/or the number of tests grow, especially when tests run in series. As the number of features and associated tests grows, so does the length of each test run. Distributing the tests across interconnected nodes can solve both problems.
Similar to the previous idea, an interconnected swarm of nodes could distribute tasks to the individual nodes; and the nodes execute the tasks they’re given. These tasks have the individual nodes execute containers and run apps inside those containers. A high number of nodes in the network can mean any container/app combination can be run at any given moment. Imagine using this type of network to offer cloud services: picture a decentralized EC2.
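The scheduling piece can start very simple: split the suite across however many nodes are connected. A sketch of round-robin partitioning (a real scheduler would also weigh historical test duration, which we ignore here):

```javascript
// Sketch: assign test files to N nodes round-robin so a suite that ran
// in series now runs in parallel across the swarm.
function partitionTests(testFiles, nodeCount) {
  const buckets = Array.from({ length: nodeCount }, () => []);
  testFiles.forEach((file, i) => buckets[i % nodeCount].push(file));
  return buckets;
}
```

With 5 test files and 2 nodes, one node runs 3 files and the other 2, so wall-clock time roughly halves.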
Distributed video analysis for augmented reality.
One type of augmented reality application reads from a video input source and analyzes the data for a match given a query. This usually means looking for an object or group of objects in the video and relating them to some metadata.
When a match is found, a request for metadata is executed and the response can be embedded on top of the video source. Imagine an application that can take any movie as input and can recognize actors’ faces, fetch metadata from IMDB and display the given data near the actor’s face. Now imagine the same application looking for any other matches (cars, animals, books, etc).
This type of application can delegate the tasks of finding the different matches to different nodes in the swarm, effectively distributing the workload while having the network benefit from the independent efforts of each node. Whenever a node finds a match the information is distributed to all the nodes, thus all nodes will receive the metadata for the video source while only having to work on independent, smaller tasks.
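The fan-out/fan-in shape of that workload can be sketched independently of any actual vision model: each node scans the same source for one kind of match (faces, cars, ...) and the swarm merges everyone’s results per timestamp (the result shape here is our own illustration):

```javascript
// Sketch: merging per-node match results into a single timeline.
// Each node contributes [{ timestamp, label, metadata }, ...] for the
// one match type it was assigned.
function mergeMatches(perNodeResults) {
  const byTimestamp = {};
  for (const results of perNodeResults) {
    for (const { timestamp, label, metadata } of results) {
      if (!byTimestamp[timestamp]) byTimestamp[timestamp] = [];
      byTimestamp[timestamp].push({ label, metadata });
    }
  }
  return byTimestamp;
}
```

The player then only has to look up the current timestamp to know what overlays to draw, regardless of which node produced each match.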
Private networks for massive events.
In many places, connecting to the internet during massive events (concerts, soccer games, etc) can be an impossible task. By introducing a custom network, companies can provide their users with the full capabilities of real internet applications. The network can deliver those applications, users can execute them directly in their devices (similar to offline mode) and take further actions once the users can get back online.
Furthermore, the network can use its replicate & redistribute capabilities to provide real-time events to all the connected users. Such an isolated network can provide a second screen functionality to everyone attending, enhancing the audience’s experience. Take note that this would require the venue to provide enough access points for the attendees.
Private networks for events with low or no connectivity.
A variation of the previous idea. This could solve a similar issue for events in places with bad or no connectivity: imagine airplane or bus trips, cruise ships, ski resorts and/or private corporate events. By joining the network, any node can help the swarm redistribute the software needed to take part in activities designed for the event.
This could provide the software distribution layer as well as the business logic for the event. There wouldn’t be any need to connect to the outside world since the network would be able to deliver all the applications needed to interact as well as all the management tools.
Global ranking system.
It is possible to offer a decentralized ranking service using the IPFS network. This would solve the problem of having one central authority responsible for the leaderboard. Anyone could take part in gamified events or dynamics that involve a ranked list of their users.
The idea is to leverage append-only logs where users are given a rank level in the network. Rather than receiving points and having to recompute the state of the leaderboard every time an event takes place, users could trade ranking spots (similar to trading cryptocurrencies).
Any developer could implement a decentralized leaderboard layer into their software and their users would be the ones holding the data. There would be no need for backend software. Every node connected to the network could fetch the leaderboard and see the user rankings.
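Deriving the leaderboard then becomes a pure function of the log: any node replays the same swap events and arrives at the same ranking. A minimal sketch (the swap-event shape is our own illustration of the append-only log idea):

```javascript
// Sketch: rebuild the leaderboard by replaying an append-only log of
// rank swaps. No server computes scores; every node derives the same
// ranking from the same log.
function replayLeaderboard(initialRanking, swapLog) {
  const ranking = [...initialRanking];
  for (const { a, b } of swapLog) {
    const ia = ranking.indexOf(a);
    const ib = ranking.indexOf(b);
    if (ia !== -1 && ib !== -1) {
      [ranking[ia], ranking[ib]] = [ranking[ib], ranking[ia]];
    }
  }
  return ranking;
}
```

Because entries are only ever appended, late-joining nodes can catch up by fetching the log and replaying it from the start.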
Decentralized video sharing app.
A distributed swarm of computers can excel at distributing large files. Say you’ve downloaded a video from the Internet, and the person in the room next to you also wants to watch that same video.
Isn’t it silly that they have to re-download the video across the backbone of the Internet, from the same cloud server that you just downloaded it from? Why can’t they get it directly from you, sitting right in the next room to them? Wouldn’t that be so much faster? Distributed technologies can make this happen.
There are a couple of Dapps already working on this functionality.
Decentralized maps/cartography app.
Decentralized maps with real-time updates from nodes. Imagine receiving traffic updates directly from other nodes and having the network distribute all this data: no central point of failure, and less chance of getting directions into wildfires.
We found this experimental software to distribute map data over a peer to peer network: peermap. An added layer of disk space usage management would be needed so that the nodes do not need to download all the data but only the specific pieces that are required.
Still, maybe smartphones aren’t best suited to form mesh networks and share lots of data. This Dapp might need some extra help from long lived nodes that can contribute to the network, so that on-demand usage can still work.
Rather than other passengers on the road reporting traffic incidents to a central server like Waze, passengers in cars could communicate directly with one another. When paired with third party antenna systems (or having such mesh network capabilities built directly into future cars and phones), this would allow creating a system like Waze that works even in remote locations with bad connectivity. Additionally, as automated cars become more and more common, the ability for them to communicate quickly, directly car-to-car, becomes even more paramount. The development of a robust decentralized transport-communication network can also prevent one particular entity from becoming the arbiter of road safety.
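The disk-space concern mentioned above (nodes fetching only the pieces they need rather than the whole dataset) maps naturally onto tiles. A toy sketch, assuming a simple `x/y` tile grid (the scheme is illustrative, not peermap’s actual format):

```javascript
// Sketch: a node requests only the map tiles covering its current
// viewport instead of replicating the entire map dataset.
function tilesForArea(minX, minY, maxX, maxY) {
  const tiles = [];
  for (let x = Math.floor(minX); x <= Math.floor(maxX); x++) {
    for (let y = Math.floor(minY); y <= Math.floor(maxY); y++) {
      tiles.push(`${x}/${y}`); // tile identifiers to fetch from peers
    }
  }
  return tiles;
}
```

A phone would fetch (and then help redistribute) only these few tiles, while long-lived nodes pin larger regions.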
Distributed software distribution.
Need an app? Get it from the closest node to you. This is the decentralized app store. All nodes in the network would share a normalized app listing or directory—when the directory grows too big in size it can be split into smaller listings. Any node could upload new software to the directory and fetch software from it.
Developers would definitely benefit from an app store that didn’t charge a big commission from their sales.
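The “split the directory when it grows too big” step above can be as simple as sharding the listing by name prefix, so a node only fetches the shard it needs. A sketch (shape and threshold are our own illustration):

```javascript
// Sketch: shard an app listing by first letter once it exceeds maxSize,
// so nodes can fetch and replicate one shard instead of the whole directory.
function splitDirectory(listing, maxSize) {
  const names = Object.keys(listing);
  if (names.length <= maxSize) return { '': listing }; // small enough: one shard
  const shards = {};
  for (const name of names) {
    const prefix = name[0].toLowerCase();
    if (!shards[prefix]) shards[prefix] = {};
    shards[prefix][name] = listing[name];
  }
  return shards;
}
```

Looking up an app then means fetching only the shard for its prefix, keeping per-node storage bounded as the directory grows.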
Decentralized state store for Dapps.
Any type of frontend app that uses a store container like Redux (popular with React) or Vuex (Vue) could implement its store using decentralized tools and have a swarm of interconnected web apps share the same state.
With a decentralized state store, apps would share data updates, generate new data, remove data and work in collaboration (as in some of the other ideas we’ve described above). This Dapp would work as an implementation layer for developers.
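A minimal sketch of such a shared store: keep each key as a `{ value, clock }` pair and merge peer states with last-writer-wins (our own simplification; real implementations would use a proper CRDT, and the API names here are illustrative, not Redux’s):

```javascript
// Sketch: a store whose state two peers can exchange and merge
// deterministically. Higher clock wins per key.
function createSharedStore() {
  let state = {}; // key -> { value, clock }
  return {
    set(key, value, clock) {
      if (!state[key] || clock > state[key].clock) state[key] = { value, clock };
    },
    merge(remoteState) {
      for (const [key, entry] of Object.entries(remoteState)) {
        if (!state[key] || entry.clock > state[key].clock) state[key] = entry;
      }
    },
    getState() { return state; },
  };
}
```

Merging in either order converges both peers to the same state, which is the property that makes the store usable without a backend.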
Enterprise for any of the above.
With private swarms, a controlled network could include any of the options above (and perhaps mix them), packaged up and shipped out. Any company could use its existing infrastructure to integrate any of the distributed/decentralized ideas from the list.
Gaps in the current Dapp stack
We identified some problems that need to be solved before full delivery of the Dapps in the list can be achieved. These problems include:
Authentication & Identity.
Many of the Dapps in our list require encrypted data and access control. They require giving certain people—and no one else—the ability to view/collaborate on data. Distributed identity is a hard problem to solve, and we’re keeping an eye on developments in this space. Some emerging tools that have caught our eye are Metamask and Blockstack.
Disk space optimization.
For the app we made, all nodes in a Community download and distribute all the data in that community. This strategy obviously falls apart for apps with larger datasets. Can we come up with a strategy where nodes hold more data only if they will be connected for a longer time? How will this affect data availability? What pieces should short-lived nodes back up?
Distributed consensus.
Proof of Work is far from the final solution for distributed consensus. Bitcoin hasn’t quite reached the mainstream and it already has more than 200k pending transactions. Part of the scaling problem is directly attributable to its use of Proof of Work. Other strategies have emerged, like Proof of Stake, but there is no clear winner yet.
Can we come up with a way to establish trust in a decentralized system that scales globally?
P2P live video streaming.
There are experiments out there to find a solution to this problem (resort), and notes on the subject date back to 2012 on TechCrunch. IPFS can even be used to stream video files. This technology could open live streaming to publishers of any scale and be incorporated directly into decentralized networks.
Even with all of these challenges in the way, we think that decentralized software is going to be extremely important in the years to come, and it’s something we will continue exploring and researching at TheoremOne.
What do you think about all of this? Tell us your idea for decentralized apps that you’d like to see!