In our course E-learning 3.0 (#el30) Stephen Downes had an interview with Pete Forsyth, Wikipedia editor and Editor in Chief of the Signpost, a community newspaper covering Wikipedia and the Wikimedia movement. He also runs a blog about all things Wikipedia and wiki-based knowledge production. This was particularly interesting to me, since I use Wikipedia a lot and I like quoting it. The broader question here was how Wikipedia avoids fake news controversies and how its contributors arrive at consensus.

Since I’m used to quoting Wikipedia, I was a bit shocked when Pete told us that people should not cite Wikipedia as such, but rather the sources Wikipedia mentions to back up claims. There is no such thing as “Wikipedia”: there are people contributing articles or parts of articles, hopefully following the Wikipedia policies. There are guidelines about what counts as a good source, similar to the way journalists judge sources and their claims.

Still, I think it sometimes does make sense to quote Wikipedia, since it’s not just a totally decentralized platform where anything goes. There are policies, there is a Wikipedia culture and there are standard practices. Especially for definitions, typically at the start of articles, it can make sense to refer simply to “Wikipedia”, as often no references are available.

Of course it’s important to check the history of a Wikipedia entry and to have a look at the discussion page. The entry is an element to be judged on its own, and the history, the discussion and the quality and number of sources are all important elements.

I tried to create a test site using the InterPlanetary File System (IPFS). Since this involves using the command line, I used this command line cheat sheet.

Stephen Downes published instructional videos for the course E-learning 3.0 (#el30), however the instructions are Windows-only. Fortunately course participant Davey Moloney translated Stephen’s instructions into Mac-language.

After some ridiculous struggling with my file structure, I published my mini-site:
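For anyone repeating this, the hash you need to share is printed by the ipfs command itself. A minimal sketch of pulling it out of the output (assuming the usual `ipfs add -r` output format; the file names and hashes below are invented):

```python
# Parse the output of `ipfs add -r mysite/` to find the root hash of the site.
# The CLI prints one "added <hash> <path>" line per file, with the top-level
# directory last, so the final line carries the hash you publish.

def root_hash(ipfs_add_output: str) -> str:
    """Return the hash on the last 'added ...' line."""
    added = [line.split() for line in ipfs_add_output.splitlines()
             if line.startswith("added ")]
    if not added:
        raise ValueError("no 'added' lines found")
    return added[-1][1]  # fields: 'added', <hash>, <path>

sample = """\
added QmSmallFileHash1111111111111111111111111111111 mysite/index.html
added QmSmallFileHash2222222222222222222222222222222 mysite/style.css
added QmRootDirHash33333333333333333333333333333333 mysite
"""
print(root_hash(sample))  # the directory hash, i.e. your site's address
```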

There are various ways to keep an IPFS site online. Octavian Contis explains on his blog: “IPFS will host your website as long as it is accessed by other peers as it propagates to other nodes when it is accessed.” A simple way to keep your site up, suggested by Octavian, is to access the hash generated for your content through the gateway of infura.io: take https://gateway.ipfs.io/ipfs/<your hash> and change gateway.ipfs.io to ipfs.infura.io in the link.
This fetches the requested content through the Infura node and, by doing so, creates a persistent copy of the files. My mini-site:
https://ipfs.infura.io/ipfs/QmeBmdocCokJ1fMEYpK26uRb6b9vZYGUPUFXjHZgg217Uq/
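That gateway swap is a plain substitution on the URL host. A small sketch in Python, just illustrating the change Octavian describes (the hash is my mini-site's):

```python
# Rewrite an IPFS gateway URL so the same content is fetched through
# Infura's gateway instead; only the host changes, the /ipfs/<hash>
# path stays identical because IPFS addresses content, not locations.
from urllib.parse import urlparse, urlunparse

def via_infura(url: str) -> str:
    """Swap the gateway host, keeping the /ipfs/<hash> path intact."""
    parts = urlparse(url)
    return urlunparse(parts._replace(netloc="ipfs.infura.io"))

print(via_infura(
    "https://gateway.ipfs.io/ipfs/QmeBmdocCokJ1fMEYpK26uRb6b9vZYGUPUFXjHZgg217Uq/"
))
```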

There are ways to use IPFS and still have your own domain name; I’ll check that out later.

For other versions of the decentralized web, have a look at my post about The Beaker Browser.

In our course E-learning 3.0 (#el30) we discussed assessment. Course facilitator Stephen Downes:

The traditional educational model is based on tests and assignments, grades, degrees and professional certifications. But with activity data we can begin tracking things like which resources a person read, who they spoke to, and what questions they asked.

We can also gather data outside the school or program, looking at actual results and feedback from the workplace. In the world of centralized platforms, such data collection would be risky and intrusive, but in a distributed data network where people manage their own data, greater opportunities are afforded.

So this explains why a course about e-learning contains modules about the decentralized web and data protection.

The task for this week however has to do with badges (which in a world of automated data capture would become less relevant, unless maybe as part of a gamification approach):

Create a free account on a Badge service (several are listed in the resources for this module). Then:

Create a badge.
Award it to yourself.
Use a blog post on your blog as the ‘evidence’ for awarding yourself the badge.
Place the badge on the blog post.

Stephen wrote a blog post about his own work with badges and about why he gets involved with badges.

I used Badgr, like Stephen did, avoided using Facebook or Google-logins and created and verified a badge which I show here:

Course task for E-learning 3.0 (#el30): use The Beaker Browser – an experimental browser for exploring and building the peer-to-peer web – or the InterPlanetary File System (IPFS) to put a document online. IPFS is a protocol and network designed to create a content-addressable, peer-to-peer method for storing and sharing hypermedia in a distributed file system.

In other words, both technologies enable you to publish documents online without using Google or hosting companies – but there are some serious limitations. Your document or site remains online only as long as you or at least one of your readers keeps their computer on. Which means that for most people, their stuff will go offline pretty soon. There are companies offering hosting for these “distributed” systems, and I used one of them for The Beaker Browser project. Of course, one might ask, why go through all the hassle only to end up with a third party hosting your stuff again?

I lacked time the last few days, but fortunately I had experimented with these technologies well before the assignment. I even started a blog via The Beaker Browser to write about my first experiences. It’s hosted at Hashbase; they “keep your files online, even when your computer is turned off.”

One of the great aspects of the course E-learning 3.0 (#el30) is the interaction between the participants. A network of blogs is discussing various elements of e-learning and the decentralized web. In my previous post I expressed a concern about using the blockchain in the context of managing your identity in a decentralized way. The blockchain is

“an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way”. For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for inter-node communication and validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without alteration of all subsequent blocks, which requires consensus of the network majority. (Wikipedia)
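The “cannot be altered retroactively” property in that quote comes from each block containing the hash of its predecessor. A toy sketch of just that hash-chaining idea (not a real blockchain – no network, no consensus – and the transactions are invented):

```python
# A minimal hash chain: every block records the hash of the previous block,
# so editing any block invalidates all the blocks after it.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic serialization so the same block always hashes the same.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64  # genesis predecessor: all zeros
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob", "bob pays carol"])
assert is_valid(chain)
chain[0]["data"] = "alice pays mallory"   # retroactive edit...
assert not is_valid(chain)                # ...is immediately detectable
```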

There are serious tensions between the European General Data Protection Regulation (GDPR) and the idea of storing personal data on the blockchain. Just one issue is the right to be forgotten, which implies that people can demand the erasure of their data – very problematic in a blockchain context. There are workarounds, but these have their own disadvantages, as Andries Van Humbeeck explains on Medium.

Thanks to the #el30-network, an interesting discussion about the tension between blockchain and the right to be forgotten started in the comments on my previous post. One participant, Dorian, had this to say:

Roland’s concern is indeed a theme that The Circle discusses at length.

[For those who haven’t read it: in the novel, “The Circle” is a massive software corporation which builds its power and political influence on its totalitarian ownership over people’s personal data — but also enforces absolute “transparency” as people’s “source of trust”, by making privacy criminal, and publicity (or publicness) compulsory.]

Most existing blockchain solutions are certainly not appropriate to carry people’s private data, since they are basically gigantic public ledgers shared and copied integrally among the computers of all users…

However, some new solutions are popping up that attempt to preserve the main benefits of decentralised information (and thus, power) sharing, while giving individuals much more control over *what* they share with the rest of the world, and being less wasteful in terms of energy and information redundancy.

One of the most inspiring projects I know of is called Holochain (https://holochain.org/). Contrary to Bitcoin or Ethereum for example, it doesn’t rely on one huge ledger, but on a fractal concatenation of micro-ledgers connected into one common network. It looks extremely promising, not just for financial or IT purposes, but as a tool for fairer economic and socio-political systems… and, yes, as a better way to inscribe one’s identity into the world wide web!
(not to mention that it’s being built by good people, who are not interested in becoming billionaires)
To learn more, I’d recommend the following articles, and their links:
https://open.coop/2018/06/14/holochain-perfect-framework-decentralised-cooperation-scale/
https://medium.com/h-o-l-o/we-caught-your-eye-articles-written-about-us-169d00998551

The articles and sites mentioned by Dorian are very interesting. The Holochain story is slowly getting into the mainstream media. Some words of caution though: Holochain is a big idea and a very deep project. It’s also complicated to understand and known only to a niche audience. There are other projects aiming at decentralizing the web, like Tim Berners-Lee’s Solid. Looking for a definition that avoids terms such as ‘hash tables’ and ‘git’, I came back empty-handed.

Marketing

Some basic notions make people shiver, like the idea of sharing the spare capacity of your computer. Sharing a room via Airbnb became a success, but sharing your computer in times of fear of hacking seems a serious marketing challenge (I don’t think the “sandbox” notion is something the average computer user fully understands). Some ideas are great but never gain traction, as I experienced in virtual worlds and virtual reality. The actual development of Holochain seems to have started in 2016, yet two years later the Holochain site seems unable to explain what it is in a way non-geeks would understand. The homepage of holochain.org starts with a video in which a developer starts mentioning Ruby on Rails. A journalist like me, working for an interested but general audience, finds not a single usable text snippet explaining what it’s all about.

Holo-host saves marketing

However, as far as marketing is concerned, the Holochain community is being saved by a related project, the Holo host. It’s literally a box (in various versions) which is pre-configured to act as part of a hosting network, making it possible to host Holochain apps in a decentralized way. The video is less technical and actually does a good job explaining what it’s about:

A graphic explainer (click to enlarge):

Graphical scheme of the holo host box.

Finally some textual explanation:

Holochain is a new technology for distributed computing. Holo makes it possible for this technology to be used by mainstream internet users and spread faster.

Holochain is a platform infrastructure technology for distributed peer-to-peer applications, and Holo is the first application to be built on top of it. The purpose of Holo is to act as a bridge between the budding community of distributed Holochain apps, and the current centralized web. By creating an ecosystem and currency that enable distributed hosting services provided by peers, Holo brings access to distributed applications to the familiar web browser. The long-term goal is for Holo to run itself out of business by expanding the community built on and around Holochain apps until the majority of people switch over to using Holochain directly. But adoption of a new technology as fundamental as Holochain won’t happen overnight, so Holo is here to ease that transition.

Actually you don’t need to buy the box to participate: “While the HoloPort is optimized for hosting the network and is the easiest way to be a part of the community and earn Holo fuel, you will be able to run the Holo software on a variety of devices. We’re selling HoloPorts in order to jump-start the Holo ecosystem with many stable, dedicated hosting devices, but we encourage users to join our community through any means at their disposal. Initially the Holo software will only be available for download and installation on computers running Linux; later macOS and Windows will be supported.”

I think I’ll buy a Holo box, even though I fear that convincing “the internet” to embrace such a revolutionary project, against the interests of established giant cloud providers, will be very challenging indeed.

If you want to see even more identity graphs, have a look at this funky video by the facilitator of our E-learning 3.0 course (#el30), Stephen Downes:

He also offered a reasonably clear presentation about identity, keys and authentication. If identity, online and offline, is ultimately as much about possibility, aspirations, hopes and dreams as about facts and connections, identity data seem valuable enough to be stored away securely, out of reach of big internet companies wanting to collect our data – and of the authorities.

Immutable. Really?

Inevitably, the blockchain is an important reference in these discussions. Just as I have doubts about big corporations and the authorities, I don’t feel at ease with blockchain technology. For now, the technology seems cumbersome and difficult for non-geeks to understand. It is often presented as a magical-technological solution for issues of trust and societal unease, one which can only really be understood by the high priests of technology. But more importantly, even if it works and there are no hidden power grabs by opaque groups and experts, are we sure we want our identities defined in an immutable way?

In the European Union quite some people embrace the right to be forgotten. What if at some point in the future, when my identity evolves, I really want to erase parts of my former identities? Maybe erasing and destroying parts of your identity is constitutive of forming a new identity. While it seems relatively straightforward to erase social media profiles and blog posts, and while it’s even possible to get Google to erase personal information about me, this would not be possible using the blockchain, which promotes immutable data storage that cannot be tampered with. Or maybe I overlook certain possibilities of the blockchain which would allow for such a ‘right to be forgotten’ – please let me know if this is the case!
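One family of workarounds, along the lines Van Humbeeck discusses, keeps the personal data off-chain and puts only a salted hash on the chain; erasing the off-chain record then leaves the immutable hash behind as an unlinkable fingerprint. A hypothetical sketch (the “ledger” here is just a Python list, and the record contents are invented):

```python
# Sketch: only a salted hash of the personal data goes "on-chain"; the data
# itself lives off-chain and can be erased. After erasure, the remaining
# on-chain hash reveals nothing on its own.
import hashlib
import os

on_chain = []     # immutable ledger (stand-in: append-only list)
off_chain = {}    # erasable storage: record_id -> (salt, data)

def register(record_id: str, personal_data: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    on_chain.append({"id": record_id, "hash": digest})
    off_chain[record_id] = (salt, personal_data)

def verify(record_id: str) -> bool:
    """Check the off-chain data still matches its on-chain commitment."""
    entry = next(e for e in on_chain if e["id"] == record_id)
    salt, data = off_chain[record_id]
    return hashlib.sha256(salt + data.encode()).hexdigest() == entry["hash"]

def forget(record_id: str) -> None:
    del off_chain[record_id]  # the on-chain hash stays, but is now unlinkable

register("r1", "some personal details")
assert verify("r1")
forget("r1")
assert "r1" not in off_chain  # data gone; only an opaque hash remains on-chain
```

The obvious disadvantage, as the text above notes, is that you are back to trusting whoever holds the off-chain store.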

In our course E-Learning 3.0 (#el30) facilitator Stephen Downes asked us to create an identity graph. We should not use a node “me”, “myself” or similar. I made a mind map using MindMeister, but I dislike the fact that the format seems to impose a central node. I put Buddhism/humanism at the center, since those are core values and practices which permeate my life. Part of it, of course, is aspirational. Actually it would be nice to see the nodes moving, changing positions all the time. Stephen also asks some questions about the graph:

  • What is the basis for the links in your graph: are they conceptual, physical, causal, historical, aspirational? Answer: Well, all that.
  • Is your graph unique to you? What would make it unique? What would guarantee uniqueness? Answer: The combination of interests, passions maybe rather specific, but is it unique? Why should it be unique? And if there is such a thing as ‘typically me’, I guess it somehow eludes whatever description or graph.
  • How (if at all) could your graph be physically instantiated? Is there a way for you to share your graph? To link and/or intermingle your graph with other graphs? Answer: The graph is shared here; MindMeister provides tools to share and mix. I could have embedded the graph, but I avoid that for security reasons.
  • What’s the ‘source of truth’ for your graph? Answer: Introspection, which is always dubious.

Picture of the mind map, click to enlarge, link above.

The task for this week in the course E-learning 3.0:

Create a model graph of some aspect of the E-Learning 3.0 course (it doesn’t have to be an actual graph, only a representation of what an actual graph might look like). We’ve already seen, e.g., graphs on the relations between people in the course. Could there be other types of graphs?
In your model, consider how the states of the entities in that graph might vary. Consider not only how nodes might vary (e.g., a person might have a different height over time) but also how the edges might vary (e.g., a person might have a different strength of relation (calculated how?) with another person over time).
In your model, consider how knowledge about the changes in states in the graph might be used.

I just wonder whether the nodes or vertices should be people and the relations or edges should be between people (or their blogs). I’m more inclined to have ideas or topics as nodes and draw the relations between ideas. Such a graph would be very similar to a concept map.

How would the states of the entities in the graph or concept map vary? One could use a tool such as MindMeister to keep a timeline of versions. The first map, corresponding to the start of the course, would probably depict a limited number of rather general ideas, but as the course progresses and the map gets more complex, the ideas would become more differentiated and the strength of the connections could also vary.
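A sketch of what such a varying idea graph could look like in code: topics as nodes, and edges whose strength is recorded per week, so the graph has a different state at different points in the course. The topic names, weeks and strengths are invented for illustration:

```python
# An "idea graph" where edge strength varies over time: querying at week 2
# and at week 5 returns different states of the same graph.
from collections import defaultdict

class IdeaGraph:
    def __init__(self):
        # frozenset({topic_a, topic_b}) -> {week: strength}
        self.edges = defaultdict(dict)

    def connect(self, a, b, week, strength):
        self.edges[frozenset((a, b))][week] = strength

    def strength(self, a, b, week):
        history = self.edges[frozenset((a, b))]
        # latest known strength at or before the given week
        weeks = [w for w in history if w <= week]
        return history[max(weeks)] if weeks else 0.0

g = IdeaGraph()
g.connect("graphs", "identity", week=1, strength=0.2)
g.connect("graphs", "identity", week=4, strength=0.8)  # the link grew stronger
print(g.strength("graphs", "identity", week=2))  # 0.2
print(g.strength("graphs", "identity", week=5))  # 0.8
```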

Other tools such as TheBrain allow you to build giant databases in the form of mind maps or even one big all-encompassing map (I won’t elaborate here on the differences between mind maps and concept maps). It’s possible to have the system show the different entities in a random way, offering you the different perspectives on the map in quick succession.

Not sure whether concept maps count as “graphs”, but I think they do and I’d love to try this out.

Course participant Matthias does something similar on his blog, using Cmap. (He made his own tool, http://condensr.de/, as Jenny says in the comments on this post.)

Facilitator Stephen Downes of the course E-learning 3.0 (#el30) explains Graphs in this video. In his own words:

The graph is the conceptual basis for web3 networks. A graph is a distributed representation of a state of affairs created by our interactions with each other. The graph is at once the outcome of these interactions and the source of truth about those states of affairs. The graph, properly constructed, is not merely a knowledge repository, but a perceptual system that draws on the individual experiences and contributions of each node. This informs not only what we learn, but how we learn.

Graph vs. storytelling

What interested me particularly is the idea that stuff like this website can be represented as a graph, which is fundamentally different from a representation as a linear narrative. Graphs enable a view from a variety of perspectives. In education we are drawn towards the narrative, the causal explanation, the single actor. There is a critique of this in the book How History Gets Things Wrong by Alex Rosenberg. Professor Rosenberg (Duke University) demonstrates how our addiction to narratives gets in the way of understanding history. Graphs can be a corrective to this.

The question is whether narratives should by definition be linear. Can’t we tell stories with different paths, depending on choices made by the people formerly known as the audience – making them active participants?

A demonstration of graphs can be found in this post by Laura Ritchie. It shows that when we demonstrate our learning with a graph, we change our perception of what we are learning and how we are learning it. It changes our understanding of where the knowledge comes from. The essence is that everything depends on something else.

In GitHub you have cloning, versioning, merging and forking, which are manipulations of a graph that lead to something new. Machine learning builds on the characteristics of graphs. Aggregating, remixing and repurposing are skills which define the new way to learn.
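Those Git operations really are graph manipulations: commits form a directed acyclic graph, a fork adds a new line of development, and a merge is simply a node with two parents. A small sketch (the commit names are made up):

```python
# Commits as a graph: each commit points to its parent(s). A merge commit
# has two parents, so it inherits both histories.
commits = {
    "a1": [],            # initial commit
    "b2": ["a1"],        # commit on the main line
    "f1": ["a1"],        # commit on a fork
    "m1": ["b2", "f1"],  # merge: a node with two parents
}

def ancestors(commit, graph):
    """All commits reachable by following parent edges."""
    seen, stack = set(), [commit]
    while stack:
        c = stack.pop()
        for parent in graph[c]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("m1", commits)))  # the merge inherits both histories
```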

A graph or network is not just a place to store and manipulate data, it’s a perceptual system. Thinking and perceiving are one and the same state, so Stephen argues.

In the course E-learning 3.0 our facilitator Stephen Downes had an interview with Ben Werdmuller, who co-founded Elgg and Known, worked on Medium and Latakoo, and, at Matter, invested in innovative media startups to support a stronger democracy. These ventures are all related to the decentralized web: the movement away from the big silos such as Facebook, enabling internet users to own and manage their own data. But what are all these projects about? A brief overview:

Elgg is an award-winning open source social networking engine that provides a framework on which to build all kinds of social environments; Known allows any number of users to post to a shared site with blog posts, status updates, photographs and more; Medium is a blogging platform; Latakoo moves big files around (think video); and Matter Ventures is a media accelerator.

While Matter is regrouping right now and Werdmuller is no longer involved there, he is working with yet another open source start-up, Unlock. That project wants to enable people to earn money on the web without middlemen. Unlock is a protocol which enables creators to monetize their content with a few lines of code in a fully decentralized way – so it promises. It’s blockchain, but beyond the virtual currency speculation.

Graphs

In our course E-learning 3.0 (#el30) Stephen Downes discussed the graph-concept: “This concept will be familiar to those who have studied connectivism, as the idea of connectivism is that knowledge consists of the relations between nodes in a network – in other words, that knowledge is a graph (and not, say, a sequence of facts and instructions).” The interview is part of the graph-module of the course.

The blockchain fits into this notion of “graph”, yet there are some wicked concrete problems involved: privacy, and payments for illegal goods and services. The decentralized web in itself (which is broader than “the blockchain”) can also provide a safe haven for communities spreading hate and fake information; Werdmuller himself published a post about the highly controversial Gab platform. It’s not very clear to me how to solve these issues, which are already very challenging for big centralized companies such as Facebook and Twitter and which seem even more difficult on a decentralized web with lots of completely independent operators, often with hidden identities.

Metadata are very important and enable authorities and big corporations to learn a lot about web users, while the common user herself does not have the means for such analyses. The rich and powerful can protect their information, while the rest of us are basically condemned to use open platforms or to allow companies to use our personal data.

Access

The decentralized web right now is also open, but projects such as Unlock are not only important for facilitating the monetization of web content; they can also enable users to regulate access to information. The web right now has no access control layer, in the same way that it does not have an identity or payments layer. Just as there are now projects to provide an access control layer, there are possibilities for an identity layer as major browsers launch cryptocurrency wallets – but all this seems to be at a very early phase.

IndieWeb is the idea that one should be able to share, discuss and publish from one’s own website or even domain name, in a way that is not controlled by any single company. Why is it bad that Facebook, Twitter and LinkedIn control these activities? There is a good side: these companies are getting better at fighting hate speech. But because Facebook, for instance, depends on selling ads, it has a strict identity policy which requires users to publish under their official name. That name policy is bad for vulnerable communities and has a chilling effect on discussions and open speech, so Werdmuller explains.

Your own site

But how realistic is it to expect everyone to run a website? Many hosting companies are doing a lousy job in terms of security and user-friendly interfaces – the widely used installation technology cPanel looks like ancient technology from the nineties. There is a new generation of technology, such as the Helm personal email server, which could expand into general hosting. Werdmuller hopes a hosting company will conquer the market with easy-to-use 21st-century technology for normal people. Prices now are still too high. Internet service providers want the ordinary consumer to download stuff; people who want to upload and run their own servers get pushed towards more expensive business plans. So there is still a very real broadband issue, even in cities such as San Francisco.

Werdmuller explained the more technical basics of the IndieWeb, such as webmentions and the microformats classes which add semantic meaning to HTML tags, enabling users to communicate on other websites without leaving their own site. I wrote about my experiences in the previous post Indiewebifying this site. The most important thing, however, is owning your own site – and allowing the web archive access.
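For the curious, this is roughly what those classes look like in practice: a made-up blog post marked up as an h-entry, and a minimal parser that just collects the class attributes a webmention receiver would look for (real receivers do much more than this):

```python
# Extract microformats class names (h-entry, p-name, e-content, u-url)
# from a snippet of IndieWeb-style HTML. The post below is invented.
from html.parser import HTMLParser

post = """
<article class="h-entry">
  <h1 class="p-name">Indiewebifying this site</h1>
  <div class="e-content">My notes on webmentions...</div>
  <a class="u-url" href="https://example.com/indieweb-post">permalink</a>
</article>
"""

class MfClasses(HTMLParser):
    def __init__(self):
        super().__init__()
        self.classes = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for each opening tag
        for name, value in attrs:
            if name == "class":
                self.classes.extend(value.split())

parser = MfClasses()
parser.feed(post)
print(parser.classes)  # ['h-entry', 'p-name', 'e-content', 'u-url']
```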

The value of having your own website can be challenged. After all, “owning” stuff and “having control” seem like outdated values in an era when people want to share services rather than own goods (think cars and bikes in big cities). Yet a site can display your professional assets without depending on the whims of companies such as Twitter, Facebook or Google. Using the IndieWeb formats for your site helps search engines find your content and give it a more prominent place (SEO). Werdmuller claims that every single career advance in his life came from his blog.

Then again, nobody says you should limit yourself to just one blog or site. People have multiple interests, and while Facebook loves to capture the whole you, you can decide for yourself to publish several sites, linking or not linking between them, and even use different names. That’s what I started doing by having this new site for decentralized web stuff rather than using my older MixedRealities site, which deals more with virtual and augmented stuff. I publish about the #el30 course on both sites, but because I use the same tag, all the posts are captured nicely by the gRSShopper software the course uses.

Desirable and usable products

Talking about RSS (Rich Site Summary or Really Simple Syndication) and feeds: Werdmuller believes feed readers will make a major comeback. He knows about various projects, but we’ll have to see whether those projects will really go mainstream – even the famous Google Reader never reached a mass audience, and Google cancelled the service to the dismay of academics, journalists and fans of the open web. It all has to do with having a product which is usable by people who barely understand URLs and which corresponds to a real need or desire in the market.

Werdmuller stresses the importance of usability. Web users in general are no experts in using URLs. Ethical software has to be usable by non-experts. Like journalism and research, web design has to be human-centered: work with a small group of the people you are trying to help, confront them with prototypes, get feedback and iterate, so Werdmuller advises. OpenID and a lot of other technologies did not do that, and so they failed; identity is now something people deal with using their social media accounts.

So ethical software should be made in a human-centered way; it should also correspond to real desires and needs of the users, and it should be financially feasible and sustainable even when the development is not for profit. Which means that for creating our decentralized web, sponsorships and academic involvement might be crucial, just as they were at the beginning of the internet and the web.