On Tuesday November 5 at Noon Pacific we’ll discuss the next chapters of the Peeragogy Handbook. We’ll be asked to reflect on our motivation. Why are we engaging in these activities? 
The Handbook is not just a book one reads to gather information. It’s about actually doing things. For instance, it asks you to consider which problems you want to tackle and how peer learning and peer production could help you address them.

The book also mentions how peeragogues tried to build an accelerator. The fun thing is that in a previous version of this reading group we tried the same thing. We could discuss how the project went and what lessons the experience offers. Mind you, the accelerator program in my previous reading group was about ideation and offering feedback; there was no financial investment.

We’ll also discuss yet another experiment, the 5PH1NX case study, in which students were encouraged to take their learning into their own hands. Things did not always work out smoothly: simply handing over a project is not necessarily a good approach, as sometimes students want to build from scratch. The principles used for this experiment, called ‘the Law’, were as follows:

  1. You cannot “obey” or “break” The Law. You can only make good decisions or bad decisions.
  2. Good decisions lead to positive outcomes.
  3. Bad decisions lead to suffering.
  4. Success requires humanity.
  5. “For the strength of the Pack is the Wolf, and the strength of the Wolf is the Pack.” -Rudyard Kipling
  6. “The Way of the sage is to act but not to compete.” -Lao Tzu
  7. Be honorable.
  8. Have fun.
  9. Question.
  10. Sapere aude.

Is this a good Law? Anything we would change or add?

Let’s try to start the next chapter, about patterns (if we cannot go that fast, we’ll discuss this next week).
A pattern is anything with a repeated effect. In peeragogy, that means repeating processes and interactions that advance the learning mission. There are also anti-patterns: frequent occurrences that are not desirable.
The architect and design theorist Christopher Alexander is the author of A Pattern Language (1977). Read more about pattern languages here.
Ward Cunningham, the inventor of the wiki, pointed out that wikis are tools to share and modify patterns. Maybe the experts among us can explain a bit more about Ward Cunningham and topics such as federated wikis, and how these relate to patterns.

This is a general outline we could use to describe patterns:

Title: Encapsulate the idea – possibly include a subtitle

Context: Describe the context in which it is meaningful. What are the key forces acting in this context?

Problem: Explain why there’s some issue to address here.

Solution: Talk about an idea about how to address the issue.

Rationale: Why do we use this solution as opposed to some other solution?

Resolution: How are the key forces resolved when the solution is applied?

What’s Next: Talk about specific next steps. How will the active forces continue to resolve in our project?
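To make the outline concrete, here is a hypothetical pattern sketched in that format (my own invention, not one of the Handbook’s patterns):

Title: Rotating Facilitator – sharing the hosting role

Context: A weekly reading group where one person always hosts. Key forces: engagement, and dependence on a single organizer.

Problem: When one person does all the facilitation, others stay passive, and the group collapses if that person drops out.

Solution: Rotate the facilitator role each week, with a short checklist so nobody has to improvise.

Rationale: Shared facilitation spreads skills and commitment more evenly than appointing a permanent host.

Resolution: Engagement rises and the group no longer depends on one person.

What’s Next: Draft the checklist together and schedule the first rotation.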
Again, the book is a call to action: it offers many patterns but also invites you to make your own pattern language. 

We’ll study many patterns during our reading of the Peeragogy Handbook and hopefully build some ourselves.

If you want to join the reading group, just let me know.

We’re launching a new series of meetings starting on Tuesday, October 29, at Noon Pacific, on Zoom. The schedule for these weekly meetings is as follows:

Noon – 12.30: those who want to update and augment Howard Rheingold’s book The Virtual Community are invited to do so. We’ll start working on the intro and first chapter. We just finished a first reading; more about that here.
12.30 – 13.00: let’s discuss and annotate the Foreword and the Introduction of the Peeragogy Handbook.
13.00 – 13.30: serendipity time! Let’s exchange projects, readings, encounters – maybe they’ll inspire some of the participants! During preparatory meetings people suggested adding material about DAOs and virtual reality. These kinds of suggestions can lead to separate but connected projects, which can eventually merge with the main project.

For the annotation of the books we use hypothes.is and Google Docs.

If you’re interested in joining for all or some of the sessions, let me know. I’ll invite you to a mailing list where you’ll get the necessary links and practical information.

I failed to update this blog for quite some time, but I did not stop learning. I participated in a course facilitated by Howard Rheingold about Augmented Collective Intelligence. Some students did not want to stop after the five-week course, and after a cumbersome decision-making process we decided to do a collaborative reading of Howard’s book The Virtual Community, which is available online.

We did two chapters a week for five weeks. I made summaries of each chapter:

Intro
Chapter 1
Chapter 2
Chapter 3 and 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Overview

The chapters were discussed on a closed forum (the reading was free, but we wanted the participants to be able to express themselves freely) and in a weekly Zoom conference. Howard attended a few sessions, so we could put questions to the author.

Our five weeks of reading were a great experience – why read on your own if you can learn so much more by reading with others? We experimented with Zoom and with the annotation system hypothes.is.

There will most probably be a follow-up: we’d like to update the links of the online version of the book and give more information about a number of persons and institutions mentioned in the book.

A new project could be a collaborative reading of The Peeragogy Handbook, which would be an interesting meta-experience since that book – also available online – is all about organizing online learning communities.

If you’re interested in reading the Handbook, please let me know!

So how interesting is the course Web of data on Coursera (see previous post)? Well, first let me put this question in a broader context. I’m interested in the decentralized web from a learning perspective. One of the most fascinating projects in that regard is Solid, empowering people to separate their data from the applications that use it. It allows people to look at the same data with different apps at the same time.

I’m interested in the technology behind this project, and it seems this technology is rather complex. Look at this roadmap on GitHub for developers wishing to prepare themselves for Solid: it seems daunting for people who are just starting out. I guess it’s far less challenging for experienced web developers, since Solid builds on web development fundamentals (think HTML, CSS, JavaScript and the major frameworks) as well as on linked data technologies such as the Resource Description Framework (RDF) and SPARQL, an RDF query language. RDF and SPARQL are less familiar to most developers.

The above-mentioned course gives a broad yet thorough overview of RDF. It starts with deceptively simple topics such as the difference between the web and the internet and the separation between presentation and content. The course prepares you for a different vision of the web: not as a collection of pages, but as a collection of data that can be linked.

We discussed fascinating tools such as the Discovery Hub, an exploratory search engine built on top of Wikipedia, or more precisely on top of the data extracted by DBpedia. I searched for Linked Data on Discovery Hub and it returned 227 topics and 10 kinds of topics. One of these topics is important for the Discovery Hub itself: DBpedia, a project aiming to extract structured content from the information created as part of the Wikipedia project. This is linked with Wikidata, which is more and more replacing the data in Wikipedia’s infoboxes. DBpedia wants to use the data provided by Wikidata and also help Wikidata become the editing interface for DBpedia.

We also learned to use curl, the command-line tool and library for transferring data with URLs, and we learned about the web service OpenCalais, which automatically creates linked data from the textual content one submits.
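As an illustration of the kind of curl call this involves: thanks to content negotiation, the same DBpedia URL can return machine-readable Turtle instead of an HTML page (the resource in the URL is my own example):

```shell
# Ask DBpedia for the Turtle representation of a resource
# instead of the human-readable page; -L follows redirects,
# -H sets the Accept header that drives content negotiation
curl -L -H "Accept: text/turtle" "http://dbpedia.org/resource/Linked_data"
```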

The second week of the course was (even) more technical and delved into the composition rules for RDF. The context and abstract syntax of RDF 1.1 is undoubtedly important for all those who will actually build things, but I pretty soon decided to stay at a more generalist level – I simply have neither the time nor the inclination to become a developer.

For those who want to try things out, part of the whole syntax hell is automated. RDF Translator is an online conversion tool that transforms RDF statements from one RDF syntax to another, e.g. from the RDF/XML syntax to the N3/Turtle syntax (and yes, the course encourages you to study all those syntaxes). There are also web services, such as Visual RDF, for visualizing data using these technologies. This stuff is neither slick nor user-friendly, but then again I guess the core audience is academic.
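To give an idea of what such a conversion involves, here is one and the same statement written by hand in two of those syntaxes (the URI is illustrative, and this is a sketch, not actual RDF Translator output):

```turtle
# Turtle: a prefix keeps the statement compact
@prefix dct: <http://purl.org/dc/terms/> .

<http://example.org/handbook> dct:title "The Peeragogy Handbook" .

# The same triple in N-Triples: one full-URI statement per line
# <http://example.org/handbook> <http://purl.org/dc/terms/title> "The Peeragogy Handbook" .
```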

All this is very interesting and helps me understand what linked data is about and what kind of progress is being made. As usual I put myself at the center of the learning experience, focusing more on certain aspects and neglecting others for now – which presumably means I won’t pass all the tests – but who cares?

This MOOC on the Coursera platform – Web of data – is a joint initiative by EIT Digital, Université de Nice Sophia-Antipolis / Université Côte d’Azur and INRIA, introducing the Linked Data standards and principles that provide the foundation of the Semantic Web.

We’ll study the principles of a web of linked data, the Resource Description Framework (RDF) model, the SPARQL Query language and the integration of other formats such as JSON-LD.
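To give a first taste of what SPARQL looks like before the course dives in, here is a minimal query over hypothetical FOAF-style data (the data and query are my own illustration, not course material):

```sparql
# Names of everyone who knows at least one other person
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT DISTINCT ?name
WHERE {
  ?person foaf:name ?name ;
          foaf:knows ?friend .
}
```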

All of which seems important to really understand a project for the decentralized web such as Solid.

Ruben Verborgh (Ghent University in Belgium) explained eloquently why we need a decentralized web and why linked data is important during the FOSDEM gathering in Brussels, Belgium. Have a look at the slide show; a video is forthcoming.

Interoperability is a key issue, and it is enabled through Linked Data in RDF: even if we all own and manage our own data, we still have to be able to share and integrate it, and apps have to be able to exchange data. Every piece of data should be able to connect to every other piece of data, which is enabled by JSON-LD, or JavaScript Object Notation for Linked Data.
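As a sketch of what such a connectable piece of data can look like in JSON-LD (the example URIs and the choice of the schema.org vocabulary are my own illustration, not from the talk):

```json
{
  "@context": "https://schema.org/",
  "@id": "https://example.org/people/alice",
  "@type": "Person",
  "name": "Alice",
  "knows": { "@id": "https://example.org/people/bob" }
}
```

Because "knows" points to another URI, any app that understands the vocabulary can follow the link to Bob’s data, wherever it is stored.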

I hope to get a better understanding of all this by attending the course.

Hundreds, perhaps a few thousand, developers gathered this weekend in Brussels, Belgium for FOSDEM, a conference promoting free and open source software. I mainly attended (by streaming video) the tracks about the decentralized web and privacy. Some first impressions, starting with the things that worry me:

  • Nowadays it’s practically impossible to live in “full software freedom”. One needs proprietary software for work and daily life. Even the activists Bradley M. Kuhn and Karen Sandler (Software Freedom Conservancy) confess they can’t avoid proprietary software. For Karen it’s an existential matter: she has to wear a defibrillator (running proprietary software).
  • Mobile devices tend to make software freedom ever more difficult. We have also become dependent on JavaScript, which quickly and automatically loads (often proprietary) software into our browsers.
  • The open source community focuses on certain areas while neglecting other domains. Being mindful of this would be a first step.
  • The notion of “surveillance capitalism” was mentioned a lot. In this day and age capitalism is about harvesting data – all the data corporations can get, no matter how personal. These data help corporations and political actors reduce uncertainty, and ultimately help them manipulate us.
  • We need regulators to intervene. As long as our friends and contacts stay on Facebook and Twitter by the hundreds of millions or billions, things will not change except for the one percent of hacktivists and geeks.

Reasons to be cheerful:

  • The European General Data Protection Regulation (GDPR) is an inspiring development, but it’s not enough, since the GDPR on its own will not provide alternatives.
  • We are getting tools that help us become more mindful about tracking and data harvesting. Santiago Saavedra and Konark Modi talked about the Trackula plugin and the browser extension Local Sheriff.
  • Exodus Privacy also helps people become aware of tracking; they focus on Android.
  • There is a fledgling movement of self-hosters and federated hosters. LibreHosters is one of the various groups passionately working on this.
  • YunoHost tries to make server administration easy and fun, like running your own site. It’s a good thing that technical experts realize they need to reduce complexity if they want movements like the decentralized web to gain traction.
  • Roger Dingledine explained how the venerable Tor-project gets more traction (estimated daily users between two and eight million).
  • The search engine DuckDuckGo is also doing great things (for instance a very interesting mobile application that makes you aware of trackers) and is getting more attention from users worldwide.
  • The Interplanetary File System (IPFS) is one of the projects working on a decentralized web; Paula de la Hoz presented this ambitious project for a new web.
  • Ruben Verborgh presented another project for the decentralized web, Solid. The inventor of the web, Tim Berners-Lee, wants to build on existing web technologies (ensuring interoperability) to enable people to manage their own data in data pods, giving app builders the opportunity to compete on functionality rather than on harvesting data. I’ll post more about Solid in the coming days and weeks.
  • I noticed a lot of attention for ethics in the broadest sense of the word. Natacha Roussel and zeyev brought a feminist perspective when they analyzed the myth of the hero-coder who single-handedly solves world problems. They contrast this with a practice of care in collaboration, skill sharing and awareness of the impact of what one does.

PressED is a Twitter conference (#pressedconf18/#pressedconf19) looking into how WordPress is used in teaching, pedagogy and research. #pressEDconf19 is happening on April 18th.

This is how it looked last year. Using video and links to blog posts, lots of content can be discussed even on a short-form medium such as Twitter.

The #el30 course about learning and the decentralized web may be finished for now (there will be new editions), but I’m continuing my exploration of all things decentralized web.

One of the projects out there to build a decentralized web is Solid. The project is led by Tim Berners-Lee, the inventor of the World Wide Web. Solid would enable us to store all our personal data ourselves, which raises the question of how we link our data with apps and with other people’s data (managed by us, the owners of our own data).

Luckily, through URLs and the Resource Description Framework (RDF), it is possible to link data. It’s like hyperlinks on the classical web, but even more radical, since we link pieces of data. This enables automated software to store, exchange, and use machine-readable information distributed throughout the web, in turn enabling users to deal with the information with greater efficiency and certainty.
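As a toy illustration of that idea (plain Python, not real RDF tooling; the URIs and names are made up): two independently published sets of triples join up as soon as they share a URI.

```python
# Triples as (subject, predicate, object) tuples, with URIs as plain strings.
# Shared URIs let two independently published datasets link up, which is
# the core idea of linked data.

site_a = {
    ("http://example.org/alice", "foaf:knows", "http://example.org/bob"),
}
site_b = {
    ("http://example.org/bob", "foaf:name", "Bob"),
}

# Merging two graphs is just set union of their triples
graph = site_a | site_b

def objects(graph, subject, predicate):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Follow the link from Alice's data into Bob's data:
friend = objects(graph, "http://example.org/alice", "foaf:knows")[0]
print(objects(graph, friend, "foaf:name"))  # ['Bob']
```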

But how do we start using linked data? I asked professor Ruben Verborgh (Ghent, Belgium), and he suggested having a look at Wikidata. Wikidata is a collaboratively edited knowledge base hosted by the Wikimedia Foundation. It’s a linked open data hub and a great place to start learning about data while contributing to the project.

I just started linking Wikidata to this site, using the plugin Wikidata References for WordPress. Now it’s easy for me to link tags to Wikidata. It mainly involves looking up the Wikidata reference for a tag (which one can do via the plugin) and adding a description. If Wikidata is incomplete, wrong or simply lacks the entry, I can add it in Wikidata.

The result for the reader is this for instance:

What you see are the ‘tag archives’ (one needs to click a tag to see them), here the tag ‘The Long Now Foundation’, which shows the number of the corresponding Wikidata entry and a short description. Wikidata of course provides more information and links.
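For the curious, Wikidata itself can be queried with SPARQL at query.wikidata.org. A minimal sketch of looking up such an entry by its label (assuming the entry’s English label matches the tag exactly):

```sparql
# Find the Wikidata item whose English label is "Long Now Foundation"
# (rdfs: is one of the prefixes predeclared by the Wikidata query service)
SELECT ?item WHERE {
  ?item rdfs:label "Long Now Foundation"@en .
}
```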

This takes more time when creating a blog post (as I noticed writing this one), but it actually helps the learning process. Part of the open learning & publishing workflow I try to develop is making a lexicon, and now I realize that one can do so while contributing to and using one of the most interesting knowledge bases out there, Wikidata. I added this to the workflow I posted on GitHub.

Professor Ruben Verborgh (Semantic Web, Ghent University, Belgium, research affiliate at the Decentralized Information Group at MIT, US) is convinced web apps are deeply broken: 

they compete mainly on who harvests the most data, not on who provides the best experience or innovation. This has consequences for developers, who are unwillingly dragged into this rat race, and end users, whose privacy and potential is limited.

 

Verborgh promotes the Solid ecosystem, the latest project by web inventor Tim Berners-Lee, which advocates a different way of building apps that is better for everyone. He will present Solid at the FOSDEM conference in Brussels, Belgium (2 & 3 February), and the organization has already interviewed Verborgh.

On his own blog he recently published a long post Re-decentralizing the web, for good this time.

In that post he “explains the history of decentralization in a Web context, and details Tim Berners-Lee’s role in the continued battle for a free and open Web. The challenges and solutions are not purely technical in nature, but rather fit into a larger socio-economic puzzle, to which all of us are invited to contribute.”

Today I became a member of The Long Now Foundation. Here is what they say about themselves:

The Long Now Foundation was established in 01996 to develop the Clock and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common. We hope to foster responsibility in the framework of the next 10,000 years.

I discovered them because Bruce Sterling, one of my favorite thinkers and writers, gave a talk at a Long Now event. People involved with The Long Now are Kevin Kelly, Brian Eno, Stewart Brand… They organize inspiring talks and do great stuff like building a mechanical clock which should last 10,000 years. I think it’s a good thing to support them. They help people to think and to debate in a larger context, in a slower and more thoughtful way.