musicmatzes blog


After thinking a while about the points I laid out in my previous post, I'd like to update my ideas here.

It is not necessary to read the first post to understand this second one, but it does not hurt either.

Matrix and Mastodon are nice – but federation is only the first step – we have to go towards fully distributed applications!

(me, at the 34th Chaos Communication Congress, 2017)

The idea

With the rise of protocols like the matrix protocol, activitypub and others, decentralized social community platforms like matrix, mastodon and others gained traction and became real. I consider these platforms, especially mastodon and matrix, to be great steps into the future and use both enthusiastically. But I think we can do better. Federation is the first step out of centralization and definitely a good one. But we have to push further – towards fully distributed environments!

(For a “Why?” have a look at the end of the article!)

How would it work?

The foundations of how a social network on IPFS would work are rather simple. I am very tempted to use the buzzword “blockchain” in this article, but because of the hype around the word and because nobody really understands what a “blockchain” actually is, I refrain from using it.

I use a better term instead: “DAG” – “Directed Acyclic Graph”. “Merkle tree” would also fit, but that term brings implementation details to mind, which I want to avoid: one instantly thinks of crypto, hash values and blobs when talking about hash trees or Merkle trees. A DAG is a more abstract concept which fits my ideas better.

What we would need to develop a social network (its core functionality) on IPFS is a DAG and some standard data formats we agree upon. We also need a public/private key infrastructure, which IPFS already has.

There are two “kinds” of data which must be considered: meta-data (which should be replicated by as many nodes as possible) and actual user data (posts, messages, images, videos, files). I won't talk about the second kind very much here, because the meta-data is where the problems are.

Consider the following metadata blob:

{
  "version": 1,
  "previous": [ "Qm...1234567890" ],

  "profile": [ "Qm...098765", "Qm...54312" ],

  "post": {
    "mimetype": "text/plain",
    "nodes": ["Qm...abc"],
    "date": "2018-01-02T03:04:05+0200",
    "reply": [],
  },

  "publicfollow": [ "Qm...efg", "Qm...hij" ]
}
  • The version key describes the version of the protocol, of course.

  • Here, the previous array points to the previous metadata blob(s). We need multiple entries here (an array) because we want to create a DAG.

  • The profile key holds a list of IPNS names which are associated with the profile.

The version, previous and profile keys are the only required ones in such a metadata blob. All other keys shown above are optional, and one metadata blob should contain at most one of them.

  • The post object describes the actual user data. Some meta-information is attached, for example the mimetype ("text/plain" in this case) and the creation date; more could be thought of. The nodes key points to a list of actual content (again via IPFS hashes) – I'm not yet convinced whether this should be a list or a single value. Details! I'd say that these three keys are required in a post object. The reply key marks this post as a reply to another post; it is, of course, optional.

  • The publicfollow key holds a list of IPNS hashes of other profiles which the user follows publicly. Whether such a thing is desirable is up for discussion; I show it here to hint at the possibilities.

  • More such data could be considered, though the meta-data blobs should be kept small: with 4 KiB per meta-data blob (which is a lot) and 10 million blobs (which I do not consider that much, because every interaction which is input into the network in one form or another results in a new meta-data blob), we end up with roughly 38 GiB of meta-data, which is really too much. With 250 bytes per meta-data blob (which sounds like a reasonable size) we get about 2.3 GiB for 10 million blobs. That sounds much better. (See the quick calculation after this list.)
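
For reference, the plain arithmetic behind these numbers (same style as the bandwidth estimate later in this article):

bytes_per_blob_large = 4096                 # 4 KiB per meta-data blob
bytes_per_blob_small = 250                  # compact meta-data blob
number_of_blobs      = 10_000_000

total_large = bytes_per_blob_large * number_of_blobs   # ~41 GB, roughly 38 GiB
total_small = bytes_per_blob_small * number_of_blobs   # 2.5 GB, roughly 2.3 GiB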

The profile DAG

The idea of linking the previous version of a profile from each new version is of course one of the key points. With this approach, nobody has to fetch the whole list of profile versions; traversing the chain backwards is only required if a user wants to see old content from the profile she's browsing.
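
To illustrate, such a backwards traversal could look like the following sketch in Python. It talks to a local IPFS daemon via its standard HTTP API (the /name/resolve and /cat endpoints exist in stock go-ipfs; the meta-data format is, of course, the hypothetical one from above):

import json, requests

API = "http://127.0.0.1:5001/api/v0"      # local IPFS daemon, default port

def resolve_ipns(name):
    # Resolve an IPNS name to the hash of the latest meta-data blob
    r = requests.post(API + "/name/resolve", params={"arg": name})
    return r.json()["Path"].rsplit("/", 1)[-1]

def get_blob(h):
    # Fetch one meta-data blob and parse it as JSON
    r = requests.post(API + "/cat", params={"arg": h})
    return json.loads(r.text)

def profile_history(ipns_name, limit=100):
    # Walk the profile DAG backwards (newest first), at most `limit` blobs
    queue, seen, blobs = [resolve_ipns(ipns_name)], set(), []
    while queue and len(blobs) < limit:
        h = queue.pop(0)
        if h in seen:
            continue
        seen.add(h)
        try:
            blob = get_blob(h)
        except Exception:
            continue                      # content may be gone - that is fine
        blobs.append(blob)
        queue.extend(blob.get("previous", []))  # follow the DAG edges
    return blobs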

Because of IPFS and its caching, content automatically gets replicated over nodes as users browse profiles. Nodes can cache either only meta-data blobs (not much data) or user content as well (more data). This can happen automatically or user-driven – several possibilities here! It is even possible for users to “pin” content if they think it's important to keep.

Profile updates can even be “announced” using PubSub, so other nodes can fetch the new profile versions and cache them. The latest profile metadata blob (or “version”) can be published via an IPNS name. The IPNS name should be published per-device and not per-account. (This is also why the profile array in the metadata JSON blob can hold several IPNS names!)

Why should we publish IPNS names per-device, and why do we actually need a DAG here? Because we want multi-device support!

Multi-device support

I already mentioned that the profile-chain would be a DAG. I also mentioned that there would be a profile key in the meta-data blob.

This is because of the multi-device support. If two, three or even more devices post to one account, we need to be able to merge different versions of an account. Consider Alice and Bob sharing one account (which would be possible!). Now Bob loses his connection to the internet. Because we are on IPFS and can work offline, this is not a problem: Alice and Bob both continue creating content and thus new profile versions:

A <--- B <--- C <--- D <--- E
        \
         C' <--- D' <--- E'

In the DAG shown above, Alice posts C, D and E, each referring to its predecessor. Bob creates C', D' and E', likewise each referring to its predecessor. Of course, both C and C' refer to B.

As soon as Bob comes back online, Alice notices that there is another chain of posts to the profile and can merge the chains by publishing a new version F which points to both E and E':

A <--- B <--- C <--- D <--- E <--- F
        \                         /
         C' <--- D' <--- E' <-----

Because Bob also sees another chain, his client would likewise publish a new version of the profile (F') where E and E' are merged – one of the problems which must be sorted out. But a rather trivial one, in my opinion, as the clients only need to do some sort of leader election. And this election is only temporary, until the merge node is published – so not really a complicated form of consensus-finding!
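
As an illustration, one trivially deterministic “election” plus the merge blob itself could look like this (a minimal sketch, not a fixed design – it assumes every device already knows the profile array, i.e. the list of per-device IPNS names):

def elect_merger(profile_ipns_names):
    # Deterministic and coordination-free: every device computes the same
    # winner, here simply the lexicographically smallest IPNS name.
    return min(profile_ipns_names)

def merge_node(heads, profile_ipns_names):
    # The merge blob carries no content of its own; it only joins branches
    # by listing all current heads (e.g. E and E') in `previous`.
    return {
        "version": 1,
        "previous": sorted(heads),
        "profile": profile_ipns_names,
    }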

What has to be sorted out, though, is how the devices/nodes which share an account – and now need to agree on which one merges the chains – communicate with each other. I have not yet thought about how this should be done; maybe IPFS PubSub is a viable option. Cryptographic signatures play an important role here.

This gets a bit more complicated if there are more than two devices posting to one account, and also if some of them are not available – though it is still in a problem space near “we have to think hard about this”... and nowhere near “seems impossible”!

The profile key is provided in the account data so the client knows which other chains should be checked and merged. Consequently, only nodes which are already allowed to publish new profile versions are allowed to add new nodes to that list.

Deleting content in the DAG

Deleting old versions of the profile – or old content – is possible, too. Because the previous key is an array, we can refer to multiple old versions of a profile.

Consider the following chain of profile versions:

A<---B<---C<---D<---E

Now, the user wants to drop profile version C. This is possible by creating a new profile version which refers to E and B in the previous field and then dropping C. The following chain (DAG) is the result:

A<---B     <---D<---E<---F
      \                 /
       -----------------

Of course, D now points to a node which does not exist. But that is not a problem. Indeed, it's a fundamental point of the idea – that content may be unavailable.

F should not contain new content. If F contained new content, dropping that content later would become harder, as the previous keys would be copied over, creating even more links to previous versions in the new profile version.
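
For the example above, the meta-data blob for F would look something like this (hashes abbreviated; note that there is deliberately no content key at all):

{
  "version": 1,
  "previous": [ "Qm...E", "Qm...B" ],
  "profile": [ "Qm...098765", "Qm...54312" ]
}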

“Forgetting” content

Because clients won't traverse the whole chain of a profile, but only the newest 10, 100 or 1,000 entries, older content slowly gets “forgotten”. Of course it is still there – the device hosting it still has it (as do other devices which post to the same account, and eventually caching servers). But if the user who published the content deletes it, the network may become unable to fetch it at some point.

Is that bad? I don't think so! Important content gets replicated by others, so if I post a comment on an article, I could (automatically or manually) pin the article itself in my IPFS instance to preserve it. If I do not and the author of the article thinks that it might not be that interesting, the article may be deleted and gets unavailable to the network.

And I think that is fine. Replicate important content, delete unimportant content. The user has the power to decide here!

Comments on posts (and comments)

Consider you want to comment on a post. Of course you create new content, which links to the post you commented on. But the person who wrote the original post does not automatically link to your comment, so nobody else is able to find it.

The approach for solving this is to provide updates to content. An update is simply a new meta-data blob in the profile. The blob would contain a link to the original post and the comment on it:

{
  "version:" 1,
  "previous": [ "Qm...1234567890" ],

  "profile": [ "Qm...098765", "Qm...54312" ],

  "post": {
    "update": "Qm...abc",
    "new-reply": "Qm...ghjjk",
  },
}

The post.update and post.new-reply keys would link to meta-data blobs: the update one to the original post (or the latest update on the post), the new-reply one to the post from the other user which comments on it. Maybe it would also be an option to list all direct replies to the post here. Details!

Because this works on both “post” and “reply” kinds of data, comments on comments are possible.

Comments deep down the chain of comments would have to slowly propagate to the top – to the actual post.

Here, several configurations are possible:

  • Automatically include comments and publish new profile versions for them.
  • Propagate comments only until some mark is hit (e.g. the original post is more than 1 month old, or more than 100 comments have been propagated).
  • The user selects other users whose comments are propagated automatically, while everyone else's comments have to be moderated.
  • The user has to confirm every propagation (fully moderated comments).

The key difference to known approaches is that it is not the author of the original post who permits comments, but always the author of the post or comment that the reply was filed for. I don't know whether this is a nice thing or a problem.

Unavailable content

The implementation of the social network has to be error-resistant, of course. IPFS hashes might not resolve, and fetching content might not be possible (temporarily or at all). But that's an implementation detail to me, so I will not lose any more words about it.

Federated component

One might think “If I go offline with my node, my posts are not accessible if nobody else is online having them”. And that's true.

That's why I would introduce a federated component, which would run a stripped-down version of the application.

As soon as another instance connects and a new post is announced, the instance automatically pins or caches it. Of course, this would mean that all of these federated instances pin all content, which is surely not nice. Posts which have been pinned for a certain amount of time are most likely distributed well enough that the federated nodes can drop them... maybe after 90 days, maybe after 10... Details!

Subscribing (privately) and other private information

Another issue with multi-device support is subscribing privately to another account. For example, if a user (let's call her Amy) subscribes to another user (let's call him Sheldon) on her notebook, this information needs to be stored somehow. And because Amy's machines do not necessarily sync with each other, her mobile phone may never learn that following Sheldon is a thing now!

This problem could be solved by storing the “follow” information in her public profile. However, some users might not want everyone to know whom they follow. Cryptographic measures could be considered to fix the visibility issue.

But then, users may want to “categorize” their friends, store them in groups or whatever. This information would be stored in the public profile as well, which would create even more noise on the network. Also, because cryptography is hard and this information would be stored forever, it might not be an option: some day the crypto might be broken and reveal all the things that were stored privately before.

Another solution would be for Amy's devices to somehow sync directly, without others being able to read any of that data. Something like a CRDT holding a configuration file which is shared between the devices directly (think of a git repository which is pushed between the devices directly, without going through a server on the internet). This would, of course, only work while the devices are on the same network.

As you see, I have not thought about this particular problem very much yet.

Discovering content

What I also did not spend much time thinking about is how clients discover new content. When a user installs a client, that client does not know any IPFS peers yet – or rather, any “social network nodes” it can fetch user profiles/data from. Even if it knows some bootstrap nodes to connect to, it might not get content from them if they do not serve any social network data and the user does not know any hashes of user profiles. To find new social network IPFS nodes, a client has to know their IPNS hashes – but how to discover them?

This is a hard problem. My first idea would be a PubSub channel where each client periodically announces its IPNS hashes. I'm not sure whether PubSub nodes must be connected directly – if so, the problem just got harder. There's also the problem that this channel would be rather high-volume as soon as the network grows: if each client announces its IPNS hash every 15 minutes, for example, we get 4 messages per client per hour. That's already a lot of bandwidth once we speak about 1,000, 10,000 or even 100,000 clients! It is also an attack vector through which the system can be flooded. Not nice!

One way to think about this: if only nodes which are subscribed to a topic also forward the topic's messages (as this comment suggests), we could stretch the time between “publish” messages in the topic. Such a message would contain all IPNS hashes a node knows about, so the amount of data per message would be rather large, but as the network grows, a node would need to send this message less and less often, reducing the number of messages and bytes sent. Still, if each node knows 10,000 nodes and sends this list once an hour, we get

bytes_per_hash  = 46                                  # one IPNS hash
number_of_nodes = 10_000                              # nodes each client knows
message_size    = bytes_per_hash * number_of_nodes    # 460 kB per message
bytes_per_hour  = number_of_nodes * message_size      # every node sends once per hour

4.28 GiB of “I know these nodes” messages per hour. That obviously does not scale!

Maybe each client should offer an API where other clients can ask it which IPNS hashes it knows. That would be a “pull” approach rather than a “push” approach, which would limit bandwidth a bit. This could even be done via PubSub as well, with the channel name generated from the IPFS instance hash, for example. I don't know whether this would be a nice idea. Still, it would need some “internet-facing” software so clients can talk directly to each other – and I don't know whether IPFS offers functionality to do this in a simple way.
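
A rough sketch of such a pull over PubSub (entirely hypothetical: the topic naming scheme and the message format are made up here, and pubsub_publish/pubsub_subscribe are assumed wrappers around the experimental IPFS PubSub API):

def ask_peer_for_profiles(peer_id, my_peer_id):
    # Ask one specific peer instead of flooding a global channel; the
    # peer-specific topic name is derived from the peer's instance hash.
    pubsub_publish("discover/" + peer_id,
                   {"type": "profiles?", "reply-to": "discover/" + my_peer_id})

def answer_discovery_requests(my_peer_id, known_profiles):
    # Every client listens on its own topic and answers pull requests.
    for msg in pubsub_subscribe("discover/" + my_peer_id):
        if msg.get("type") == "profiles?":
            pubsub_publish(msg["reply-to"],
                           {"type": "profiles!",
                            "profiles": sorted(known_profiles)})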

Either way, I have no solution for this problem yet.

Why IPFS?

Platforms like Scuttlebutt or Movim also implement distributed social networks – why not use those? And why IPFS and not the dat protocol or something else?

That question is rather simple to answer: IPFS provides functionality and semantics other tools/frameworks do not. Most importantly the notion that content is immutable, but also full distribution (not federation, like with services such as movim or mastodon).

Having immutable content is a key point. The dat protocol, for example, features mutable content, as it is roughly based on bittorrent (if I understood everything correctly – feel free to point out mistakes). That might be nice in some cases, though I think immutability is the way to go: distributed applications and frameworks with immutability as a core concept are better suited for netsplits, slow connections and peer-to-peer applications. And from what I saw in the last weeks and months of looking at frameworks for distributed content storage, IPFS is way more mature than these other frameworks. IPFS is built to replace existing content distribution and to stay, and that's a nice thing to build applications on.

Remaining Questions

Some questions are remaining:

  • Is it possible to talk to a specific node directly in IPFS? This would be helpful for discovering content by asking nodes which profiles they know. It would also be a nice way of finding consensus when multiple devices have to agree on which node publishes a merge.
  • How fast is IPFS with small files? If I need to traverse a long chain of profile updates, I constantly request a small file, parse it and continue requesting the previous node in the chain. That should be fast. If it is not, we might need to introduce some “pack files” which provide a list of metadata nodes so that traversing becomes unnecessary. But that makes deleting content rather complicated, to be honest.

That's all I can think of right now, but there might be more questions which are not yet answered.

Problems are hard in distributed environments

Distributed systems involve a lot of new complexity, and we have to think carefully about details and about how to design our systems. The “distributed approach” opens up new ways to design systems, and new paradigms emerge.

Moving away from a central authority which holds the truth, the global state and also the data results in a paradigm shift we really have to be careful about.

I think we can do it and design new, powerful and fully distributed systems with user freedom, usability, user convenience and the state of the art in mind. Users want a system which is reliable, failure-proof, convenient and easy to use; they want to “always be connected”. I think we can provide such software. Developers want nice abstractions to build upon, data integrity, failure-proof software with simplicity designed into the system, reusable data structures and the ability to scale. I think IPFS is the way to go for this. In addition, I think we can provide free software with free data.

I do not claim to know the final solution to any of the problems laid out in this article. It's just that I think about them and would love to get an open conversation started on the whole subject of distributed social networks and the problems that come with them.

And maybe we can come up with a prototype for this?

tags: #distributed #network #open-source #social #software

34c3 was awesome. I prepared a blog article as my recap, but failed to produce enough content. That's why I will simply list my “toots” from mastodon here, as a short recap of the whole congress.

  • (2017-12-26, 4:04 PM) – Arrived at #34c3
  • (2017-12-27, 9:55 AM) – Hi #31c3 ! Arrived in Adams, am excited for the intro talk in less than 65 min! Yes, I got the tag wrong on this one
  • (2017-12-27, 10:01 AM) – Oh my god I'm so excited about #34c3 ... this is huge, girls and boys! The best congress ever is about to start!
  • (2017-12-27, 10:25 AM) – Be awesome to eachother #34c3 ... so far it works beautifully!
  • (2017-12-27, 10:31 AM) – #34c3 first mate is empty.
  • (2017-12-27, 10:46 AM) – #34c3 – less than 15 minutes. Oh MY GOOOOOOOOOD
  • (2017-12-27, 10:49 AM) – Kinda sad that #fefe won't do the Fnord this year at #34c3 ... but I also think that this year was to shitty to laugh about it, right?
  • (2017-12-27, 10:51 AM) – #34c3 oh my good 10 minutes left!
  • (2017-12-27, 11:02 AM) – #34c3 GO GO GO GO!
  • (2017-12-27, 11:16 AM) – Vom Spieltrieb zur Wissbegierig! (“From the urge to play to the thirst for knowledge!”) #34c3
  • (2017-12-27, 12:17 PM) – People asked me things because I am wearing a #nixos T-shirt! Awesome! #34c3
  • (2017-12-27, 12:59 PM) – I really hope i will be able to talk to the #secushare people today #34c3
  • (2017-12-27, 1:44 PM) – I talked to even more people about #nixos ... and also about #rust ... #34c3 barely started and is already awesome!
  • (2017-12-27, 4:28 PM) – Just found a seat in Adams. Awesome! #34c3
  • (2017-12-27, 8:16 PM) – Single girls of #34c3 – where are you?
  • (2017-12-28, 10:25 AM) – Day 2 at #34c3 ... Yeah! Today there will be the #mastodon #meetup ... Really looking forward to that!
  • (2017-12-28, 12:32 PM) – Just saw ads for a #rust #wayland compositor on an info screen at #34c3 – yeah, awesome!
  • (2017-12-28, 12:37 PM) – First mate today. Boom. I'm awake! #34c3
  • (2017-12-28, 12:42 PM) – #mastodon ads on screen! Awesome! #34c3
  • (2017-12-28, 12:45 PM) – #taskwarrior ads on screen – #34c3
  • (2017-12-28, 3:14 PM) – I think I will not publish a blog post about the #34c3 but simply list all my toots and post that as an blog article. Seems to be much easier.
  • (2017-12-28, 3:15 PM) – #34c3 does not feel like a hacker event (at least not like the what I'm used to) because there are so many (beautiful) women around here.
  • (2017-12-28, 3:36 PM) – The food in the congress center in Leipzig at #34c3 is REALLY expensive IMO. 8.50 for a burger with some fries is too expensive. And it is even less than the Chili in Hamburg was.
  • (2017-12-28, 3:43 PM) – Prepare your toots! #mastodon meetup in less than 15 minutes! #34c3
  • (2017-12-28, 3:50 PM) – #34c3 Hi #mastodon #meetup !
  • (2017-12-28, 3:55 PM) – Whuha... there are much more people than I've expected here at the #mastodon #meetup #34c3
  • (2017-12-28, 4:03 PM) – Ok. Small #meetup – or not so small. Awesome. Room is packed. #34c3 awesomeness!
  • (2017-12-28, 4:09 PM) – 10 minutes in ... and we're already discussing pineapples. Community ftw! #34c3 #mastodon #meetup
  • (2017-12-28, 4:46 PM) – Limiting sharing of #toots does only work if all instances behave! #34c3 #mastodon #meetup
  • (2017-12-28, 4:56 PM) – Who-is-who #34c3 #mastodon #meetup doesn't work for me... because I don't know the 300 usernames from the top of my head...
  • (2017-12-28, 5:17 PM) – From one #meetup to the next: #nixos ! #34c3
  • (2017-12-28, 5:57 PM) – Unfortunately the #nixos community has no space for their #meetup at #34c3 ... kinda ad-hoc now!
  • (2017-12-28, 7:58 PM) – Now... Where are all the single ladies? #34c3
  • (2017-12-28, 9:27 PM) – #34c3 can we have #trance #music please?
  • (2017-12-28, 9:38 PM) – Where are my fellow #34c3 #mastodon #meetup people? Get some #toots posted, come on!
  • (2017-12-29, 1:44 AM) – Day 2 ends for me now. #34c3
  • (2017-12-29, 10:30 AM) – Methodisch Inkorrekt. Approx. 1k people waiting in line. Not nice. #34c3
  • (2017-12-29, 10:43 AM) – Damn. Notebook battery ran out of power last night. Cannot check mails and other unimportant things while waiting in line. One improvement proposal for #34c3 – more power lines outside hackcenter!
  • (2017-12-29, 10:44 AM) – Nice. Now the wlan is breaking down. #34c3
  • (2017-12-29, 10:57 AM) – LAOOOOLAAA through the hall! We did it #34c3 !
  • (2017-12-30, 3:45 AM) – 9h Party. Straight. I'm dead. #34c3
  • (2017-12-30, 9:08 PM) – After some awesome days at the #34c3 I am intellectually burned out now. That's why the #trance #techno #rave yesterday was exactly the right thing to do!
  • (2017-12-30, 11:35 PM) – Where can I get the set from yesterday night Chaos Stage #34c3 ??? Would love to trance into the next year with it!
  • (2017-12-31, 11:05 PM) – My first little #34c3 congress résumé: I should continue on #imag and invest even more time. Not that I do not continue it, but progress is slowing down with the last months of my masters thesis... Understandable I guess.

That was my congress. Yes, there are few toots after the 28th... because I was really tired by then and also had people to talk to all the time, so there was little time for microblogging. All in all: it was the best congress so far!

tags: #ccc #social

#matrix , #ipfs , #scuttlebutt and now #mastodon – We're living in awesome times! centralization < decentralization/federation < distribution! #lovefortech

(me, April 10, 2017, on mastodon)

The idea

With the rise of protocols like the matrix protocol, activitypub and others, decentralized social community platforms like matrix, mastodon and others gained traction and became real. I consider these platforms, especially mastodon and matrix, to be great steps into the future and use both enthusiastically.

But can we do better? Can we do more distribution? I think so!

So far we have a twitter-like microblogging platform (mastodon), a chat platform (matrix) and facebook-like platforms (diaspora and friendica), all of which are federated (a form of decentralization). I think we can make a completely distributed social network platform a reality today.

Let me reiterate that: I think we can make a facebook/googleplus/etc. clone which works without a central component, today. And I would even go one step further and state: all we need for this is IPFS (and related technology like IPLD and IPNS)!

This platform would feature personal profiles, publishing articles/posts/images/videos/voice messages/etc, instant messaging, following others, and all the things one would want in such a platform.

How would it work?

What do we need for this? Well, as stated before: not much! From what I can think of, we need IPFS, some sort of public/private key functionality (which IPFS already has), a nice frontend framework – and that's basically it.

Let me tell you how I think such a platform would work.

The moment a user starts the application, the application would boot an IPFS node. The username and all other information about the profile are added to IPFS as structured data. If the profile changes because the user edits it, it is added to IPFS again, using IPLD to link to its previous version.

If a user adds a post to her profile, that post is added to IPFS as well and linked from the profile via IPLD. All other nodes are informed about the new content via pubsub and are free to pin the new content (the new profile version), cache it for a while, or not care at all. The post itself could include a link to the IPNS hash of the profile under which it is published; this way, a link from the post to the current version of the profile would always exist.

Because the profile always links to its previous version as well as to the post content, one might think the user's node would have to keep all data the user ever added to the network. But as the data is only referenced via links, the user is free to drop published content at any point in time.

This means that basically each operation “generates” a new profile version, which is of course published as an IPNS name. Following others would be a matter of subscribing to their “pub” channel (as in “pubsub”) or their IPNS name.
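
Put together, one “publish a post” operation could look roughly like this sketch (ipfs_add, ipfs_add_json, ipns_publish and pubsub_publish are assumed helpers around the respective IPFS APIs, not real library calls):

from datetime import datetime, timezone

def publish_post(payload, post_type, previous_head):
    # 1. add the actual content to IPFS
    content_hash = ipfs_add(payload)                    # assumed helper
    # 2. create a new profile version linking content and previous version
    profile_version = {
        "previous": [previous_head],
        "post": {
            "type": post_type,
            "nodes": [content_hash],
            "metadata": {"date": datetime.now(timezone.utc).isoformat()},
        },
    }
    new_head = ipfs_add_json(profile_version)           # assumed helper
    # 3. point the IPNS name at the new head and announce it via pubsub
    ipns_publish(new_head)                              # assumed helper
    pubsub_publish("profile-updates", new_head)         # assumed helper
    return new_head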

Chat

A chat application using IPFS is already implemented with orbit, so that's just a matter of integrating one application into another. Peer-to-peer (or rather profile-to-profile) messaging is therefore no problem.

Data format

All the data would be saved in a structured format – for example JSON (though the order of serialization is important, because of the cryptographic hashes), BSON, or any other widely adopted data serialization format.

Sidenote: as long as it is made clear that any client must support all formats, the format itself doesn't matter that much. For the simplicity of this article, I stick to JSON (also because it is the most widely known).
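
The remark about serialization order deserves a concrete note: because the blobs are content-addressed, the same logical data must always serialize to the same bytes, otherwise identical content yields different hashes. In Python, for example, this can be approximated as follows (a sketch; a real protocol would pin down a full canonical form):

import json

def canonical_json(obj):
    # Sorted keys and fixed separators: the same data always produces the
    # same byte sequence, and therefore the same IPFS hash.
    return json.dumps(obj, sort_keys=True,
                      separators=(",", ":")).encode("utf-8")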

A Profile(-version) would look roughly like this (consider 'ipfs hash' to mean “some kind of IPLD link” in this context):

{
  "previous": [ "<ipfs hash>" ],
  "post": {
    "type": "<post type>",
    "nodes": ["<ipfs hash>"],
    "metadata": {
      "date": "2017-12-12T12:00:00+0200",
      "tags": [],
      "category": "kittens",
      "custom": {}
    }
  }
}

Let me explain:

  • The previous key would point to the previous profile version(s). It would only contain IPFS hashes (for why it is plural, see “Multi-Device Support” below).
  • The post key would contain information about the post published with this profile version.
    • The type of the post could be “article”, “image”, “video”... normal stuff. But also “biography” for the biography shown on the profile or other things. Even “username” would be possible, for adding a user name to the profile.
    • The nodes key would point to IPFS hashes containing the actual payload: either the text of the article (only one hash then) or the IPFS hashes of the pictures, the video(s) or other binary content. Of course, posts could be formatted using Markdown, reStructuredText or whatever format one likes to use. It would be a client's job to render it properly.
    • The metadata field would contain plain meta information, like published date, tags, category and also custom metainformation as key-value pairs.

Maybe a version attribute for the protocol version should be added as well. Of course, this should be considered an incomplete example – I almost certainly forgot things here.

The idea of linking the previous version of a profile from each new version is very much blockchain-like, of course, with the difference that nobody needs to fetch the whole chain to get a profile – only the latest version. The more content a viewer of the profile wants to see, the more of the graph of profile versions she needs to traverse (automatically caching the content for others). This would automatically result in older content being “forgotten” slowly (though the content is not forgotten until the publisher itself and all other “pinners” drop it). Because the actual payload is not stored in the fetched data, the amount of data required to simply view a profile is rather small. A client could be configured to fetch all textual content of a profile, but not more than 10 versions, or one screen page, or something like that. The possibilities are endless here.

Federated component

One might think “If I go offline with my node, my posts are not accessible if nobody else is online having them”. And that's true.

That's why I would introduce a federated component, which would run a stripped-down version of the application.

As soon as another instance connects and a new post is announced via pubsub, the instance automatically pins or caches it. Of course, this would mean that all of these federated instances pin all content, which is surely not nice. One (rather simple and maybe even stupid) option would be to roll a dice and make it a 50-50 chance whether a post gets pinned, or something like that. Also, posts which have been pinned for a certain amount of time are most likely distributed well enough that the federated nodes can drop them... maybe after 90 days, maybe after 10... Details!
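
The dice idea as a sketch (pin and unpin are assumed helpers; the 50% chance and the 90-day expiry are just the example numbers from the paragraph above):

import random, time

PIN_PROBABILITY = 0.5                     # the "roll a dice" part
PIN_MAX_AGE = 90 * 24 * 3600              # drop pins after ~90 days

def on_post_announced(content_hash, pins):
    # Pin roughly every second announced post
    if random.random() < PIN_PROBABILITY:
        pins[content_hash] = time.time()
        pin(content_hash)                 # assumed helper

def expire_old_pins(pins):
    # Old posts are probably replicated well enough by now
    now = time.time()
    for content_hash, pinned_at in list(pins.items()):
        if now - pinned_at > PIN_MAX_AGE:
            unpin(content_hash)           # assumed helper
            del pins[content_hash]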

Blockchain-Approaches

The fundamental problem with blockchains is that every peer in the network hosts the complete content. Nobody benefits from that, especially if you think of a social network which should also work on mobile devices. With users uploading images, videos and other large blobs of data, a blockchain is the wrong approach.

That's why I think a social network on Ethereum, Bitcoin or any other cryptocurrency/blockchain is not an option at all.

IPLD

IPLD can be used not only to link posts and profiles, but also to link from content to content: from one post to another, from a post to an image, a video, a voice message... but also from a post to a git commit, an Ethereum transaction or any other IPLD-supported data structure.

One nice detail is that one does not have to traverse these links. If a user sees a post which links to other posts, for example, she does not have to fetch the linked posts to see the post itself – only if she wants to see the linked content. Caching nodes, on the other hand, can automatically traverse the whole graph and fetch all the content into their cache.

That makes an IPLD-based linking approach really beneficial.

Scuttlebutt

Scuttlebutt is a first step in the right direction. One can say what one wants about electron and the whole technology stack used in Scuttlebutt (and like or dislike the whole Javascript world), but so far Scuttlebutt seems like the first social network that is completely distributed.

I thought about whether it would be a good idea to port Scuttlebutt to use IPFS as its backend. From what I know right now, it would be a nice way of bringing IPFS and IPLD into the mix and thereby enhancing and extending the capabilities of Scuttlebutt itself.

I have no final conclusion on that thought, though.

Problems

There are several problems one has to think about when designing such a system.

Comments on Posts (and comments)

Consider you want to comment on a post. Of course you create new content, which links to the post you just commented on. But the person who wrote the original post does not automatically link to your comment, so she is neither able to find the comment (which could be solved via pubsub), nor are others able to find it.

The approach to this problem is simple: notification about comments can be done via pubsub. And if a user gets a notification about a new comment, she can approve it and automatically publish a new version of her post, with some added meta information:

  • A link to the comment
  • A link to the “old version of the content in IPFS”

Now, if a client fetches all posts of a profile, it resolves each entry to its newest version (basically the one entry which no other entry references as its old version) and only shows that latest version.
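
In code, that resolution step is small. A sketch (posts is assumed to map the meta-data hash of each fetched entry to its parsed blob, and "update" is an assumed key name for the link to the old version):

def latest_versions(posts):
    # An entry with an "update" link supersedes the entry it points to,
    # so the newest version of a post is the one nobody else updates.
    superseded = {p["update"] for p in posts.values() if "update" in p}
    return {h: p for h, p in posts.items() if h not in superseded}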

Comments on comments (and so on) would be possible with the exact same approach. That would, of course, cause the whole tree of comments to be rebuilt every time a new comment is added.

Maybe not the best idea in that regard.

Multi-Device Support

There are several problems regarding multi-device support.

Publishing content

Publishing from multiple devices with the same profile is possible – one just needs to import the private key for the signatures and the profile information on the other device.

This needs some sort of merging mechanism, though, if posts are published from two (or more) devices at the same time, or without the other devices being online to get notified of the new point of truth.

As creating two posts from two separate devices creates two new versions of the profile (because of the IPLD linking), two points of truth suddenly exist. So a merging mechanism must be implemented to merge multiple points of truth for the profile. This could yield a rather large network of profile versions, but ultimately a DAG (Directed Acyclic Graph):

        Profile Init
             ^
             |
          Post A
             ^
             |
          Post B <----+
             ^        |
             |        |
  +-----> Post C    Post C'
  |          ^        ^
  |          |        |
Post D    Post D'   Post D''
  ^          ^        ^
  |          |        |
  |          +--------+
  |          |
  |       Post E
  |          ^
  |          |
  +----------+
             |
             |
          Post F

A scenario like the one above (where each Post also represents a new version of the profile) would be easy to create with three devices:

  1. One starts using the network on a notebook
  2. Post A published from the notebook
  3. Post B published from the notebook
  4. Profile added on the workstation
  5. Post C published from the notebook while off of the internet
  6. Post C' published on the workstation
  7. Profile added to the mobile phone (from the notebook)
  8. Post D published from the mobile while off of the internet
  9. Post D' published from the notebook while off of the internet
  10. Post D'' published on the workstation
  11. Notebook comes back online, Post E published, merging the state from Post D'' from the workstation and Post D' from the notebook itself.
  12. Phone comes online, one of the devices is used to publish Post F, merging the state from Post D and Post E.

In this scenario there would still be one problem, though: if the profile is published as an IPNS name, branching off versions becomes problematic. If C is published while C' is published, both devices publish their version under the same IPNS name – now first come, first served applies. And of course that is problematic, because every device would always see one of the posts, but no device could see the other. Only at E (in the above example), when the branches are merged, would both C and C' become visible (though D still wouldn't be, as long as it isn't merged into the chain). But how does a device discover that there are two “current” versions which have to be linked from the new post?

So discoverability is an issue in this approach. Maybe someone can come up with a clean and easy solution that works for netsplits and all those scenarios.

One idea would be to have a profile key which is used to publish profile versions under an IPNS name, plus one device key per device, which is used to announce profile versions under a separate, per-device IPNS name. These per-device IPNS names could be added to the profile, so each device can find them and fetch the “current” versions from every other device. Only the initial setup of a new device would then need to be done carefully.
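
A profile version carrying such per-device announcements could look like this (purely illustrative, same placeholder style as above; the devices key is an assumption, not part of the format sketched earlier):

{
  "previous": [ "<ipfs hash>" ],
  "devices": [
    "<ipns name of the notebook's device key>",
    "<ipns name of the workstation's device key>"
  ]
}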

Or, maybe, the whole approach is wrong and another approach would fit better for this kind of problem. I don't know.

Subscribing

Another issue with multi-device support is subscribing. For example, if a user (let's call her Amy) subscribes to another user (let's call him Sheldon) on her notebook, this information needs to be stored somehow. And because Amy's machines do not necessarily sync with each other, her mobile phone may never learn that following Sheldon is a thing now!

This problem could be solved by storing the “follow” information in her public profile. However, some users might not want everyone to know whom they follow. Cryptographic measures could be considered to fix the visibility issue.

But then, users may want to “categorize” their friends, store them in groups or whatever. This information would be stored in the public profile as well, which would create even more noise on the network. Also, because cryptography is hard and this information would be stored forever, it might not be an option: some day the crypto might be broken and reveal all the things that were stored privately before.

Deleting profile versions

At some point a user may want to remove a biography entry or a user name she once published. Because all the information is chained in one long chain of versions, one may think that deleting a node is not possible. But it is!

Consider the following (simple) graph of profile versions:

A<---B<---C<---D<---E

If the user now wants to delete node C in this graph, she simply drops it. With E being the latest point of truth, one may think that finding B and A is no longer possible. That's true – but we can work around this by creating a new profile version which links the previous versions directly:

A<---B     <---D<---E<---F
      \                 /
       -----------------

Of course, D would now point to a node which does not exist. But that is not a problem. Indeed, it's a fundamental concept of the idea – that content may be unavailable.

F should not contain new content: if it did, dropping F because of its content would become harder later. And creating new versions of the profile is simple and cheap anyway.

Problems are hard in distributed environments

I do not claim to know the final solution to any of these problems. It's just that I think about them and would love to get an open conversation started on the whole subject of distributed social networks and the problems that come with them.

tags: #distributed #network #open-source #social #software

After a full night of work (honestly, I haven't slept in 26 hours now, working on my master's thesis), the next version of Cursive was published. I immediately had to drop the author a mail and thank him for his wonderful work.

And this got me thinking...

Sometimes, being awesome to each other is not just being friendly, acknowledging others' work or enthusiasm about a thing, or being tolerant about decisions others have made which might not fit one's own ideas of how life should be.

Sometimes, being awesome to each other can be a simple thing like just saying “Thank you” when it isn't expected.

I have been doing this for quite a while now – dropping other open source authors a “Thank you” mail. But you can be sure it is still something special, because it happens rarely: I write maybe two or three “Thank you” mails per year.

So if you receive one, you can be sure: I deeply value your project!

tags: #life #social

It is over. 33c3 has happened. The post-congress depression has already started.

Thank you all for being there and making congress great again! It really was a pleasure to meet you all, have awesome discussions, see awesome people and listen to awesome talks!

tags: #ccc #social #network #life

Merry Christmas and a happy 33c3!

Last year on Christmas I posted a blog post as well, so I figured I'd do it again this year.

This year, though, I have a little present for you all and for myself: this blog got a new design and is now built with another tool. Why is that? Well, because I wanted to change something. So I ditched frog and opted for hugo.

Don't get me wrong, frog is an awesome tool and I really love it! Racket is a great language and I hoped to get my feet wet with it a bit while using frog. I didn't. Sadly.

But hugo is faster, and it got to the point where speed matters. I didn't want to use nanoc, though I know it and have built some sites with it already (starting with my wiki.template, which was initially built with nanoc, and the imag-pim.org website, which is built with nanoc as well). I wanted to use something new. And a quick query to staticgen.com told me that jekyll, hugo and gitbook were my options. Yes, there are more engines – I considered middleman as well. But no, I wanted something new. So hugo it is.

By now I'm amazed. It is really fast, all the things just work and the template I chose was almost perfect for my needs.

I did some work to make the URLs backwards compatible, so all your links should work as expected. If not, tell me and I will fix it. What changed, though, is the location of the RSS file, so you have to use the new URL: RSS.

So here we go.

Have a nice Christmas Eve. See you at 33c3!

tags: #life #social

imag just got a website, a mailinglist, a git repository setup and an IRC channel.

I had wanted to set up a website for imag for months already, and I finally got to it. I actually developed the website offline until it was almost done, and now it is online.

I wrote the website using nanoc, a static site generator. I have used nanoc before and also contributed some code to it, so I already knew what to do and how to do it. I wrote a minimal theme for the website – I wanted it plain-text-ish, as imag itself is a plain text tool.

I managed to register an IRC channel on the freenode IRC network, which is awesome. travis was set up to post to this IRC channel whenever a build succeeds or fails, which is really convenient as well.

Of course we also have a mailinglist now. One can register there to contribute patches via mail and to ask questions. There's not that much on the mailinglist yet, as we do not have a community around imag yet. Anyway, there's the possibility to build one now, which is awesome, I guess!

Background

So what's running in the background here?

I registered a space at uberspace, where this very website is hosted as well, and set up a gitolite and a cgit web frontend for the git repositories.

The mailinglist is run by ezmlm, which was written by djb and for which setting up on an uberspace is very well documented.

The domain was registered on united-domains.de.

Costs

Well, I pay for these things from my own money (I can make some money this summer working at my university, so that's not a big problem).

Currently, I pay 19 Euro per year for the domain and 2.50 Euro per month for the uberspace, but I will increase the latter as soon as there are more contributors on the git hosting or on the mailinglist (that is, as soon as my setup causes actual workload on their servers), or as soon as I have a job – whatever comes first.

So that makes it 49 Euro per year with the current setup, which is affordable for me. As soon as I increase the monthly fee for uberspace (I will go to 5 Euro once I make my own money, and to more if there are contributors), this will cost me up to 100 Euro per year, if I give uberspace 6.75 Euro per month. Still not much, I guess.

As soon as I have to pay more than 100 Euro per year, I guess I will add a “support this project” button to the website... or something like that. Well, we'll see...

tags: #linux #open source #programming #rust #software #tools #imag #git #mailinglists #network #social

There is this game, slither.io. It is a really neat amusement (which, by the way, also heats up your linux device if you do not have a decent graphics driver) – and here's the trick for being good at it.

So, this browser game is a bit like multi-player snake. You start as a small snake in a world with food lying around, and you grow by eating. You steer your snake with the mouse. If you press and hold the mouse button, your snake speeds up. Speeding costs you length, but if you do it well and eat more than you lose while speeding, everything is fine. You cannot die from this: at some point you simply stop speeding because you have no power left, and you have to eat more to continue speeding. Your head can cross your own body, so that's different from normal snake. What you cannot do is cross other snakes – if you try, your snake dies and turns into food. Eaten food also counts as points, and the game shows a ranking of who is #1 at the moment.

That's all there is to it.

So how can you grow fast? You could either try to eat a lot, or (and here it comes) you trick other players (snakes) into running into your body by speeding in front of their head. The other snake then turns into food and you can eat it. Of course, longer snakes turn into more food than shorter ones.

If a snake of a certain size dies, all snakes (players) nearby try to get the biggest piece of the cake by speeding towards the newly appearing food. This often results in crashes – even more food!

The point is: how greedy are you? If you see a snake die and turn into food, you surely want to eat all of it. But you can only do that by speeding through the pile of food – increasing the likelihood of crashing into another snake and thereby turning into food yourself.

So here's how you can be really good at this game (I myself played it only about ten times before I got the longest snake, with ~27k points, and was able to hold that place for 10-15 minutes). How did I do that? By not being greedy – really, that's all there is to it! If you see another snake die, there's no problem with speeding towards it and getting a decent amount of food out of it. But if you try to get as much as you can, you'll probably kill your snake.

tags: #social

I'm at GPN16 at this very moment, and I was amazed by one event which took place while I was sitting outside, hacking on some Python code.

A guy came up to me and told me that he had just taken a nice picture of me. He showed me the picture, which was indeed really nice, and asked whether it was okay or whether he should delete it.

I love that! He asked for permission, which is really awesome. Of course he asked after he had taken the photo, but that's okay with me, as he took it with a camera and not an Android or Apple device or something like that, where I would assume instant sync to the cloud.

I really like being asked about this. I told him that I don't like being photographed, and he deleted the picture instantly, also showing me what he was doing (“Look here, Delete –> 'Delete this photo?' –> 'Yes' –> Photo deleted”).

Really awesome!

tags: #life #social #privacy

My 2 cents on 32c3 – a short recap.

It. Was. Awesome.

No, really. I didn't have that many opportunities to talk to strangers, but it was an awesome congress and I really look forward to 33c3!

I talked a bit to the rust guys (less than expected, but anyway), watched several talks, including the “Fnord Jahresrückblick” and the Perl talk. They were sooo awesome! I really recommend watching them online!

Sadly, I couldn't watch the “Security Nightmares” talk live, as my train left at 18:30, but I watched it online and it was one of the most awesome talks.

As always after a c3, I was completely exhausted afterwards (three hours less sleep per night than usual). But it was worth it.

I hope I have the opportunity to attend next year as well.

tags: #ccc #life #media #network #social