After thinking a while about the points I laid out
in my previous post,
I'd like to update my ideas here.
It is not necessary to read the first post to understand this second one,
but it doesn't hurt either.
Matrix and Mastodon are nice – but federation is only the first step – we
have to go towards fully distributed applications!
(me, at the 34th Chaos Communication Congress, 2017)
The idea
With the rise of protocols like Matrix and ActivityPub, decentralized social
community platforms like Matrix, Mastodon and others gained traction and
became reality.
I consider these platforms, especially Mastodon and Matrix, to be great steps
into the future and am using both enthusiastically.
But I think we can do better. Federation is the first step out of
centralization and definitely a good one. But we have to push further –
towards fully distributed environments!
(For a “Why?” have a look at the end of the article!)
How would it work?
The foundations of how a social network on IPFS would work are rather simple.
I am very tempted to use the buzzword “blockchain” in this article, but
because of the hype around that word and because hardly anybody really
understands what a “blockchain” actually is, I refrain from using it.
I use a better term instead: “DAG” – “Directed Acyclic Graph”. “Merkle tree”
would also fit, but that term carries a notion of implementation details
which I want to avoid: one instantly thinks of crypto, hash values and blobs
when talking about hash trees or Merkle trees. A DAG is a more abstract
concept which fits my ideas better.
What we would need to develop a social network (its core functionality) on IPFS
is a DAG and some standard data formats we agree upon.
We also need a public/private key infrastructure, which IPFS already has.
There are two “kinds” of data which must be considered: meta-data (which
should be replicated by as many nodes as possible) and actual user data
(posts, messages, images, videos, files).
I will not say much about the second kind here, because the meta-data is
where the problems are.
Consider the following metadata blob:
{
    "version": 1,
    "previous": [ "Qm...1234567890" ],
    "profile": [ "Qm...098765", "Qm...54312" ],
    "post": {
        "mimetype": "text/plain",
        "nodes": [ "Qm...abc" ],
        "date": "2018-01-02T03:04:05+0200",
        "reply": []
    },
    "publicfollow": [ "Qm...efg", "Qm...hij" ]
}
The version key describes the version of the protocol, of course.
The previous array points to the previous metadata blob(s). We need multiple
entries here (an array) because we want to create a DAG.
The profile key holds a list of IPNS names which are associated with the
profile.
The version, previous and profile keys are the only ones required in such a
metadata blob.
All other keys shown above are optional, and one metadata blob should contain
at most one of them.
The post object describes the actual user data. Some meta-information is
added, for example the MIME type ("text/plain" in this case) and the date it
was created. More fields can be thought of.
The nodes key points to a list of actual content (again via IPFS hashes).
I'm not yet convinced whether this should be a list or a single value.
Details!
I'd say that these three keys are required in a post object.
The reply key notes that this post is a reply to another post. This is, of
course, optional.
The publicfollow key holds a list of IPNS hashes of other profiles which the
user follows publicly. Whether such a thing is desirable is open for
discussion. I show it here to hint at the possibilities.
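To make the format more concrete, here is a minimal sketch in Python which builds such a metadata blob. The ipfs_put helper is hypothetical and stands in for whatever API actually stores bytes in IPFS and returns the resulting hash:

import json

def make_post_blob(previous, profile, content_hash, date, reply=None):
    # previous/profile are lists of IPFS/IPNS hashes as described above;
    # content_hash points to the actual user content.
    blob = {
        "version": 1,
        "previous": previous,        # parent metadata blob(s): this forms the DAG
        "profile": profile,          # IPNS names associated with the profile
        "post": {
            "mimetype": "text/plain",
            "nodes": [content_hash], # the actual content, stored separately
            "date": date,
            "reply": reply or [],
        },
    }
    return json.dumps(blob).encode("utf-8")

# usage (ipfs_put is a hypothetical "store and return hash" helper):
# new_version = ipfs_put(make_post_blob(["Qm...123"], ["Qm...098"],
#                                       "Qm...abc", "2018-01-02T03:04:05+0200"))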
More such data could be considered, though the meta-data blobs should be
kept small: if one thinks of 4 KiB per meta-data blob (which is a lot) and
10 million blobs (which I do not consider that many, because every
interaction which is an input into the network results in a new meta-data
blob in one form or another), we have roughly 38 GiB of meta-data content,
which is really too much.
If we have 250 bytes per metadata blob (which sounds like a reasonable size)
we get about 2.3 GiB of meta-data for 10 million blobs. That sounds much better.
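For the record, the back-of-the-envelope calculation behind these numbers:

def metadata_volume_gib(bytes_per_blob, number_of_blobs):
    # total meta-data volume in GiB
    return bytes_per_blob * number_of_blobs / 2 ** 30

print(metadata_volume_gib(4 * 1024, 10_000_000))  # ~38.1 GiB - too much
print(metadata_volume_gib(250, 10_000_000))       # ~2.3 GiB - acceptable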
The profile DAG
The idea of linking the previous version of a profile from each new version of
the profile is of course one of the key points.
With this approach, nobody has to fetch the whole list of profile versions.
Traversing the whole chain backwards is only required if a user wants to see
old content from the profile she's browsing.
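Such a backwards traversal could look like the following sketch; fetch_blob is a hypothetical helper that resolves an IPFS hash to a parsed metadata blob (or None if the content is unavailable):

def walk_profile(head_hash, fetch_blob, limit=100):
    # Walk the profile DAG backwards, newest first, visiting at most
    # `limit` blobs. Since "previous" is a list, diverged chains are
    # visited breadth-first.
    seen, queue, blobs = set(), [head_hash], []
    while queue and len(blobs) < limit:
        current = queue.pop(0)
        if current in seen:
            continue
        seen.add(current)
        blob = fetch_blob(current)   # may return None: content can be gone
        if blob is None:
            continue
        blobs.append(blob)
        queue.extend(blob.get("previous", []))
    return blobs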
Because of IPFS and its caching, content automatically gets replicated over
nodes as users browse profiles.
Nodes can cache either only meta-data blobs (not so much data) or user content
as well (more data). This can happen automatically or user-driven – several
possibilities here!
It is even possible that users “pin” content if they think it's important to
keep it.
Profile updates can even be “announced” using PubSub so other nodes can fetch
the new profile versions and cache them. The latest profile metadata blob
(or “version”) can be published via an IPNS name.
The IPNS name should be published per device and not per account.
(This is also why there is a profile array in the metadata JSON blob!)
Why should we publish IPNS names per device, and why do we actually need a DAG
here? Because we want multi-device support!
Multi-device support
I already mentioned that the profile chain would be a DAG, and that there is
a profile key in the meta-data blob.
Both exist because of the multi-device support.
If two, three or even more devices need to post to one account, we need to be
able to merge different versions of an account: Consider Alice and Bob sharing
one account (which would be possible!). Now, Bob loses connection to the
internet. But because we are on IPFS and work offline, this is not a problem.
Alice and Bob could continue creating content and thus new profile versions:
A <--- B <--- C <--- D <--- E
        \
         C' <--- D' <--- E'
In the shown DAG, Alice posts C, D and E, each referring to the former.
Bob creates C', D' and E' – each also referring to the former.
Of course both C and C' refer to B.
As soon as Bob comes back online, Alice notices that there is another chain of
posts to the profile and can now merge the chains by publishing a new
version F which points to both E and E':

A <--- B <--- C <--- D <--- E <--- F
        \                         /
         C' <--- D' <--- E' <----
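A minimal sketch of this merge step, again with the hypothetical ipfs_put helper; note that the merge blob carries no new content:

import json

def merge_heads(heads, profile, ipfs_put):
    # Merge diverged profile chains by publishing a blob whose
    # "previous" list points to all current heads, e.g. [E, E'].
    merge_blob = {
        "version": 1,
        "previous": list(heads),
        "profile": profile,
        # no "post" key: a pure merge node introduces no new content
    }
    return ipfs_put(json.dumps(merge_blob).encode("utf-8"))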
Because Bob would also see another chain, his client would also publish a new
version of the profile (F') where E and E' are merged – one of the
problems which must be sorted out. But a rather trivial one in my opinion, as
the clients only need to do some sort of leader election. And this election is
temporary until a new node is published – so not really a complicated form
of consensus-finding!
What has to be sorted out, though, is that the devices/nodes which share an
account and now need to agree upon which one merges the chains need some form
of communication between them. I have not yet thought about how this should be
done. Maybe IPFS PubSub is a viable option for this. Cryptographic signatures
play an important role here.
This gets a bit more complicated if there are more than two devices posting to
one account and also if some of them are not available yet – though it is
still in a problem space near “we have to think hard about this” ... and
nowhere in the space of “seems impossible”!
The profile key is provided in the account data so the client knows which
other chains should be checked and merged. Thus, only nodes which are already
allowed to publish new profile versions are actually allowed to add new nodes
to that list.
Deleting content in the DAG
Deleting old versions of the profile – or old content – is possible, too.
Because the previous key is an array, we can refer to multiple old versions
of a profile.
Consider the following chain of profile versions:

A <--- B <--- C <--- D <--- E
Now the user wants to drop profile version C. This is possible by creating a
new profile version F which refers to E and B in the previous field and then
dropping C. The following chain (DAG) is the result:

A <--- B <--- D <--- E <--- F
        \                  /
         ------------------
Of course, D would now point to a node which does not exist. But that is not
a problem. Indeed, it's a fundamental key point of the idea that content may
be unavailable.
F should not contain new content: if it did, dropping that content later
would become harder, as the previous key would be copied over, creating even
more links to previous versions in the new profile version.
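Sketched with the same hypothetical ipfs_put helper as above, plus an ipfs_unpin stand-in for whatever actually lets a blob vanish:

import json

def drop_version(head, parents_of_dropped, dropped, profile,
                 ipfs_put, ipfs_unpin):
    # Publish a content-free version bridging over the dropped node:
    # when dropping C from A <- B <- C <- D <- E, the new version F
    # gets previous = [E, B].
    bridge = {
        "version": 1,
        "previous": [head] + list(parents_of_dropped),
        "profile": profile,
    }
    new_head = ipfs_put(json.dumps(bridge).encode("utf-8"))
    ipfs_unpin(dropped)  # the old version may now disappear from the network
    return new_head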
“Forgetting” content
Because clients won't traverse the whole chain of a profile, but only the
newest 10, 100 or 1,000 entries, older content gets “forgotten” slowly.
Of course it is still there and the device hosting it still has it (and other
devices which post to the same account, eventually also caching servers).
Either way, content gets forgotten slowly. If the user who published the
content deletes it, the network may be unable to fetch it at some point.
Is that bad? I don't think so! Important content gets replicated by others, so
if I post a comment on an article, I could (automatically or manually) pin the
article itself in my IPFS instance to preserve it.
If I do not, and the author of the article thinks that it might not be that
interesting, the article may be deleted and becomes unavailable to the network.
And I think that is fine. Replicate important content, delete unimportant
content. The user has the power to decide here!
Comments on posts (and comments)
Consider you want to comment on a post. Of course you create new content,
which links to the post you just commented on.
But the person who wrote the original post does not automatically link to your
comment, so nobody is able to find your comment.
The approach for solving this is to provide updates to content.
An update is simply a new meta-data blob in the profile.
The blob would contain a link to the original post and the comment on it:
{
    "version": 1,
    "previous": [ "Qm...1234567890" ],
    "profile": [ "Qm...098765", "Qm...54312" ],
    "post": {
        "update": "Qm...abc",
        "new-reply": "Qm...ghjjk"
    }
}
The post.update and post.new-reply keys would link to meta-data blobs: the
update one to the original post (or the latest update on the post), the
new-reply one to the post from the other user which provides a comment on the
post.
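Sketched, with the same conventions as before:

import json

def make_update_blob(previous, profile, original_post, reply):
    # Make a reply discoverable by linking it from the original
    # author's profile. original_post and reply are IPFS hashes of
    # the respective metadata blobs.
    return json.dumps({
        "version": 1,
        "previous": previous,
        "profile": profile,
        "post": {
            "update": original_post,  # original post or its latest update
            "new-reply": reply,       # the other user's comment blob
        },
    }).encode("utf-8")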
Maybe it would also be an option to list all direct replies to the post here. Details!
Because this works on both “posts” and “reply” kind of data, comments on
comments are possible.
Comments deep down the chain of comments would have to slowly propagate to the
top – to the actual post.
Here, several configurations are possible:
- Automatically include comments and publish new profile versions for them
- Publish/propagate comments until some mark is hit (the original post is
  more than 1 month old, or more than 100 comments have been propagated)
- The user selects other users whose comments are propagated automatically,
  while all others have to be moderated
- The user has to confirm each propagation (moderated comments)
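Such policies could be modeled roughly as follows; this is only a sketch, and all names and thresholds are made up:

from datetime import datetime, timedelta, timezone

def should_auto_propagate(author, post_date, propagated_count, auto_authors,
                          max_age=timedelta(days=30), max_comments=100):
    # Returns True if the reply is propagated automatically; otherwise
    # it stops being propagated or goes to moderation. post_date is a
    # timezone-aware datetime of the original post.
    if datetime.now(timezone.utc) - post_date > max_age:
        return False       # original post too old: stop propagating
    if propagated_count >= max_comments:
        return False       # enough comments propagated already
    return author in auto_authors  # whitelisted authors skip moderation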
The key difference to known approaches here is that it is not the author of
the original post who permits comments, but always the author of the post or
comment the reply was filed for.
I don't know whether this is a nice thing or a problem.
Unavailable content
The implementation of the social network has to be error-resistant, of course.
IPFS hashes might not be there, fetching content might not be possible
(temporarily or at all). But that's an implementation detail to me, and I
will not say more about it here.
Federated component
One might think: “If I go offline with my node, my posts are not accessible
if nobody else who has them is online.” And that's true.
That's why I would introduce a federated component, which would run a
stripped-down version of the application.
As soon as another instance connects and a new post is announced,
the instance automatically pins or caches it.
Of course, this would mean that all of these federated instances would pin all
content, which is surely not nice.
Posts which are pinned for a certain amount of time are most likely
distributed well enough so the federated component nodes can drop them...
maybe after 90 days, maybe after 10... Details!
Another issue with multi-device support would be subscribing privately to
another account. For example, if a user (let's call her Amy) subscribes to
another user (let's call him Sheldon) on her notebook, this information needs
to be stored somehow.
And because Amy's machines do not necessarily sync with each other, her mobile
phone may never know that following Sheldon is a thing now!
This problem could be solved by storing the “follow” information in her public
profile, although some users might not like everyone to know whom they follow.
Cryptographic things could be considered to fix visibility.
But then, users may want to “categorize” their friends, store them in groups
or whatever. This information would be stored in the public profile as well,
which would create even more noise on the network.
Also, because cryptography is hard and information would be stored forever,
this might not be an option as some day, the crypto might be broken and reveal
all the things that were stored privately before.
Another solution would be that Amy's devices somehow sync directly, without
others being able to read any of that data: something like a
CRDT
holding a configuration file which is shared between the devices directly
(think of a git repository which is pushed between the devices directly,
without accessing a server on the internet).
This would, of course, only work if the devices are on the same network.
As you see, I have not thought about this particular problem very much yet.
Discovering content
Something else I have not spent much time thinking about yet is how clients
discover new content.
When a user installs a client, it does not yet know any IPFS peers – or
rather any “social network nodes” – where it can fetch user profiles and
data from.
Even if it knows some bootstrap nodes to connect to, it might not get content
from them if they do not serve any social network data and if the user does
not know any hashes of user profiles.
To be able to find new social network IPFS nodes, a client has to know their
IPNS hashes – but how to discover them?
This is a hard problem. My first idea would be a PubSub channel where each
client periodically announces their IPNS hashes. I'm not sure whether PubSub
nodes must be connected directly. If this is the case, the problem just got
harder. There's also the problem that this channel would be rather
high-volume as soon as the network grows. If each client announces their IPNS
hash every 15 minutes, for example, we get 4 messages per client each hour.
That's already a lot of bandwidth if we speak about 1,000, 10,000 or even
100,000 clients!
It is also an attack vector: the system could be flooded this way. Not nice!
One way to think about this is that if only nodes which are subscribed to a
topic also forward that topic's messages
(like this comment suggests),
we could reduce the time between “publishing” messages in the topic.
Such a message would contain all IPNS hashes a node knows about, so the
amount of data would be rather large. As the network grows, a node would
need to send this message less and less often, to reduce the number of
messages and bytes sent. Still, if each node knows 10,000 nodes and sends
this list once an hour, we get
bytes_per_hash = 46                               # length of an IPNS hash
number_of_nodes = 10_000
message_size = bytes_per_hash * number_of_nodes   # one "I know these nodes" message
bytes_per_hour = number_of_nodes * message_size   # each node sends one per hour

print(bytes_per_hour / 2 ** 30)                   # ~4.28

4.28 GiB of “I know these nodes” messages per hour. That obviously does
not scale!
Maybe each client should offer an API where other clients can ask them about
which IPNS hashes they know. That would be a “pull” approach rather than a
“push” approach, which would limit bandwidth a bit. This could even be
done via PubSub as well, where the channel name is generated from the IPFS
instance hash, for example.
I don't know whether this would be a nice idea.
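For illustration, such a pull-based exchange could look like the sketch below; pubsub_subscribe and pubsub_publish are purely hypothetical helpers, as the actual IPFS PubSub APIs differ between implementations:

def serve_known_hashes(instance_hash, known_hashes,
                       pubsub_subscribe, pubsub_publish):
    # Answer "which IPNS hashes do you know?" requests on a channel
    # derived from our own instance hash.
    request_topic = "discover/" + instance_hash
    for request in pubsub_subscribe(request_topic):
        reply_topic = request["from"]   # requester names its reply channel
        pubsub_publish(reply_topic, "\n".join(sorted(known_hashes)))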
Still, this would need some “internet-facing” software where clients are able
to talk directly to each other. I don't know whether IPFS offers
functionality to do this in a simple way.
Either way, I have no solution for this problem yet.
Why IPFS?
Platforms like Scuttlebutt or Movim also implement distributed social
networks, so why not use those? And why IPFS and not the Dat protocol or
something else?
That question is rather simple to answer: IPFS provides functionality and
semantics other tools/frameworks do not provide. Most importantly the notion
that content is immutable, but also full decentralization (not federation, as
with services like Movim or Mastodon, for example).
Having immutable content is a key point. The Dat protocol, for example,
features mutable content, as it is roughly based on BitTorrent (if I
understood everything correctly – feel free to point out mistakes).
That might be nice in some cases, though I think immutability is the way to
go: distributed applications, and frameworks for distributed content with
immutability as a core concept, are better suited for netsplits, slow
connections and peer-to-peer applications.
From what I have seen over the last weeks and months of looking at frameworks
for distributed content storage, IPFS is way more mature than the other
frameworks. IPFS is built to replace existing content distribution and to
stay, and that's a nice thing to build applications on.
Remaining Questions
Some questions are remaining:
- Is it possible to talk to a specific node directly in IPFS? This would be
helpful for discovering content by asking nodes what profiles they know.
It would also be a nice way for finding consensus when multiple devices have
to agree on which node publishes a merge.
- How fast is IPFS with small files? If I need to traverse a long chain of
  profile updates, I constantly request a small file, parse it and continue
  requesting the previous node in the chain. That should be fast.
  If it is not, we might need to introduce some “pack files” where a list of
  metadata nodes is provided so that traversing becomes unnecessary. But that
  makes deleting content rather complicated, TBH.
That's all I can think of right now, but there might be more questions which
are not yet answered.
Problems are hard in distributed environments
Distributed systems involve a lot of new complexity where we have to carefully
think about details and how to design our system.
New ways to design systems can be discovered through the “distributed
approach”, and new paradigms emerge.
Moving away from a central authority which holds the truth, the global state
and also the data results in a paradigm shift we really have to be careful
about.
I think we can do it and design new, powerful and fully distributed systems
with user freedom, usability, user-convenience and state-of-the-art in mind.
Users want to have a system which is reliable, failure proof, convenient and
easy to use.
They want to “always be connected”.
I think we can provide such software.
Developers want nice abstractions to build upon, data integrity, failure-proof
software with simplicity designed into the system, reusable data structures,
and the ability to scale.
I think IPFS is the way to go for this.
In addition, I think we can provide free software with free data.
I do not claim to know the final solution to any of the problems laid out in
this article.
It's just that I think about them and would love to get an open conversation
started on the whole subject of distributed social networks and the problems
that come with them.
And maybe we can come up with a prototype for this?
tags: #distributed #network #open-source #social #software