So how interesting is the course Web of data on Coursera (see previous post)? Well, first let me put this question in a broader context. I’m interested in the decentralized web from a learning perspective. One of the most fascinating projects in that regard is Solid, empowering people to separate their data from the applications that use it. It allows people to look at the same data with different apps at the same time.

I’m interested in the technology behind this project, and it seems this technology is rather complex. Look at this roadmap on GitHub for developers who want to prepare themselves for Solid: it seems daunting for people who are just starting out. I guess it’s far less challenging for experienced web developers, since Solid builds on web development fundamentals (think HTML, CSS, JavaScript and the major frameworks) and on linked data technologies such as the Resource Description Framework (RDF) and SPARQL, an RDF query language. RDF and SPARQL are less familiar to most developers.

The above-mentioned course gives a broad yet thorough overview of RDF. It starts with deceptively simple topics such as the difference between the web and the internet and the separation between presentation and content. The course prepares you for a different vision of the web: not as a collection of pages, but as a collection of data which can be linked.
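The core idea behind that vision can be sketched in a few lines of code. Here is a minimal, hypothetical illustration (the example resources are made up; the FOAF property URIs are real): every statement is a subject, predicate, object triple, and resources are identified by URIs so statements can link to each other.

```python
# A tiny illustration of the "web of data" idea: statements are
# subject-predicate-object triples, and resources are identified by URIs.
# The example.org URIs below are invented for this sketch; the
# predicates come from the real FOAF vocabulary.

triples = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://example.org/bob"),
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/bob", "http://xmlns.com/foaf/0.1/name", "Bob"),
]

def objects(subject, predicate):
    """Return all objects for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Who does Alice know?
print(objects("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows"))
# ['http://example.org/bob']
```

Real RDF stores work on the same triple model, just at web scale and with proper query languages such as SPARQL instead of a hand-rolled lookup function.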

We discussed fascinating tools such as the Discovery Hub, an exploratory search engine built on top of Wikipedia, and more precisely on top of the data extracted by DBpedia. I searched for Linked Data on Discovery Hub and it returned 227 topics and 10 kinds of topics. One of these topics is important for the Discovery Hub itself: DBpedia, a project aiming to extract structured content from the information created as part of the Wikipedia project. This is linked with Wikidata, which is more and more replacing the infoboxes in Wikipedia. DBpedia wants to use the data provided by Wikidata and also to help that project become the editing interface for DBpedia.

We also learned to use curl, the command line tool and library for transferring data with URLs, and we learned about the web service OpenCalais, which automatically creates linked data from the textual content one submits.
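In practice, fetching linked data with curl typically comes down to an HTTP request with an Accept header asking for an RDF serialization, something like `curl -H "Accept: text/turtle" http://dbpedia.org/resource/Tim_Berners-Lee`. A rough Python equivalent of that content negotiation, shown here with the request only constructed and not actually sent:

```python
from urllib.request import Request

# The equivalent of `curl -H "Accept: text/turtle" <uri>`: ask the
# server, via content negotiation, for a Turtle serialization of the
# resource instead of the HTML page. (Built here, not sent.)
req = Request(
    "http://dbpedia.org/resource/Tim_Berners-Lee",
    headers={"Accept": "text/turtle"},
)

print(req.get_header("Accept"))  # text/turtle
print(req.get_method())          # GET
```

The interesting part is the Accept header: the same URL can return human-readable HTML or machine-readable RDF, depending on what the client asks for.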

The second week of the course was (even) more technical and delved into the composition rules for RDF. The context and abstract syntax of RDF 1.1 is undoubtedly of importance for all those who actually will build things, but I pretty soon decided to stay at a more generalist level – I simply have neither the time nor the inclination to become a developer.

For those who want to try things out, part of the whole syntax hell is automated. RDF Translator is an online conversion tool that transforms RDF statements from one RDF syntax to another, e.g. from the RDF/XML syntax to the N3/Turtle syntax (and yes, the course stimulates you to study all those syntaxes). There are also web services to visualize data using these technologies, such as Visual RDF. This stuff is not exactly slick or user friendly, but then again I guess the core audience is academic.
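To get a feel for what such converters do under the hood, here is a toy serializer I sketched myself (not part of the course or of RDF Translator). It makes simplifying assumptions: subjects and predicates are full IRIs, and objects are either IRIs or plain string literals. It emits N-Triples, the most basic line-based RDF syntax.

```python
def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples.
    Simplifying assumptions: subjects/predicates are IRIs; objects
    starting with 'http' are treated as IRIs, everything else as a
    plain string literal (no language tags or datatypes)."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples([
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
]))
# <http://example.org/alice> <http://xmlns.com/foaf/0.1/name> "Alice" .
```

Turtle, RDF/XML and JSON-LD all encode exactly the same triples; they only differ in how the serialization looks, which is why mechanical conversion between them is possible at all.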

All this is very interesting and helps me to understand what linked data is about and what kind of progress is being made. As usual I put myself in the center of the learning experience, focusing more on certain aspects and neglecting others for now – which presumably means I won’t pass all the tests – but who cares?

This MOOC on the Coursera platform – Web of data – is a joint initiative by EIT Digital, Université de Nice Sophia-Antipolis / Université Côte d’Azur and INRIA, introducing the Linked Data standards and principles that provide the foundation of the Semantic Web.

We’ll study the principles of a web of linked data, the Resource Description Framework (RDF) model, the SPARQL Query language and the integration of other formats such as JSON-LD.

All of which seems important to really understand a project for the decentralized web such as Solid.

Ruben Verborgh (Ghent University in Belgium) explained eloquently why we need a decentralized web and why linked data is important during the FOSDEM gathering in Brussels, Belgium. Have a look at the slide show; a video is forthcoming.

Interoperability is a key issue, and it is enabled through Linked Data in RDF: if we all own and manage our own data, we still have to be able to share and integrate it, and apps have to be able to share data. Every piece of data should be able to connect to every other piece of data, and this is enabled by formats such as JSON-LD, or JavaScript Object Notation for Linked Data.
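To make that concrete, here is a minimal, hypothetical JSON-LD document (the example.org identifiers are invented; the FOAF vocabulary URIs are real). The trick is the @context: it maps the short keys used in the document to full IRIs, so an ordinary-looking piece of JSON becomes linked data that any RDF tool can interpret.

```python
import json

# A minimal JSON-LD document. The @context maps the short keys to full
# IRIs (here the real FOAF vocabulary), so the JSON below is also a set
# of RDF triples about http://example.org/alice (an invented example).
doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "knows": {"@id": "http://xmlns.com/foaf/0.1/knows", "@type": "@id"},
    },
    "@id": "http://example.org/alice",
    "name": "Alice",
    "knows": "http://example.org/bob",
}

# The context tells us what "name" globally means:
print(doc["@context"]["name"])  # http://xmlns.com/foaf/0.1/name

# And it remains plain JSON, round-trippable with the standard library:
print(json.loads(json.dumps(doc))["name"])  # Alice
```

This is exactly why JSON-LD matters for interoperability: two apps that have never heard of each other can still agree on what a field means, because the meaning is anchored in shared vocabulary IRIs rather than in each app's private conventions.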

I hope to get a better understanding of all this by attending the course.

Hundreds, maybe even a few thousand, developers gathered this weekend in Brussels, Belgium for FOSDEM, a conference promoting free and open source software. I mainly attended (via streaming video) the tracks about the decentralized web and privacy. Some first impressions, starting with the things that worry me:

  • Nowadays it’s practically impossible to live in “full software freedom”. One needs proprietary software for work and daily life. Even the activists Bradley M. Kuhn and Karen Sandler (Software Freedom Conservancy) confess they can’t avoid proprietary software. For Karen it’s a very existential matter: she has to wear a defibrillator (with proprietary software).
  • Mobile devices tend to make software freedom ever more difficult. We have also become dependent on JavaScript, which quickly and automatically loads (often proprietary) software into our browsers.
  • The open source community focuses on certain areas while neglecting other domains. Being mindful of this would be a first step.
  • The notion of “surveillance capitalism” was mentioned a lot. In this day and age capitalism is about harvesting data: all the data corporations can get, no matter how personal. These data help corporations and political actors reduce uncertainty, and ultimately help them manipulate us.
  • We need regulators to intervene. As long as our friends and contacts stay on Facebook and Twitter by the hundreds of millions or billions, things will not change except for the one percent of hacktivists and geeks.

Reasons to be cheerful:

  • The European General Data Protection Regulation (GDPR) is an inspiring development, but it’s not enough since the GDPR by itself will not provide alternatives.
  • We get tools helping us become more mindful about tracking and data harvesting. Santiago Saavedra and Konark Modi talked about the Trackula plugin and the browser extension Local Sheriff.
  • Exodus Privacy also helps people become aware of tracking; they focus on Android.
  • There is a fledgling movement of self-hosters and federated hosters. LibreHosters is one of the various groups passionately working on this.
  • YunoHost tries to make server administration easy and even fun, like running your own site. It’s a good thing that technical experts realize they need to reduce complexity if they want movements like the decentralized web to gain traction.
  • Roger Dingledine explained how the venerable Tor project gets more traction (estimated daily users between two and eight million).
  • The search engine DuckDuckGo is also doing great things (for instance a very interesting mobile application making you aware of trackers) and it is getting more attention from users worldwide.
  • The Interplanetary File System (IPFS) is one of the projects working on a decentralized web; Paula de la Hoz presented this ambitious project for a new web.
  • Ruben Verborgh presented another project for the decentralized web, Solid. The inventor of the web, Tim Berners-Lee, wants to build on the existing web technologies (ensuring interoperability) to enable people to manage their own data in data pods, giving app builders the opportunity to compete on functionality rather than on harvesting data. I’ll post more about Solid in the coming days and weeks.
  • I noticed a lot of attention for ethics in the broadest sense of the word. Natacha Roussel and zeyev brought a feminist perspective when they analyzed the myth of the hero-coder who single-handedly solves world problems. They contrast this with a practice of care in collaboration, skill sharing and awareness of the impact of what one does.

PressED is a Twitter conference (#pressedconf18/#pressedconf19) looking into how WordPress is used in teaching, pedagogy and research. #pressEDconf19 is happening on April 18th.

This is what it looked like last year. Using video and links to blog posts, lots of content can be discussed even on a short-form medium such as Twitter.

The #el30 course about learning and the decentralized web may be finished for now (there will be new editions), but I’m continuing my exploration of all things decentralized web.

One of the projects out there to build a decentralized web is Solid. The project is led by Tim Berners-Lee, the inventor of the World Wide Web. Solid would enable us to store all our personal data ourselves, but then the question becomes how we link that data with apps and with other people’s data (managed by us, the owners of our own data).

Luckily, through URLs and the Resource Description Framework (RDF), it is possible to link data. It’s like hyperlinks on the classical web, but even more radical, since we link pieces of data. It enables automated software to store, exchange, and use machine-readable information distributed throughout the web, in turn enabling users to deal with the information with greater efficiency and certainty.

But how do we start using linked data? I asked professor Ruben Verborgh (Ghent, Belgium), and he suggested having a look at Wikidata. Wikidata is a collaboratively edited knowledge base hosted by the Wikimedia Foundation. It’s a linked open data hub and a great place to start learning about data while contributing to the project.
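To give a taste of the kind of questions Wikidata can answer, here is a small sketch of a query for its public SPARQL endpoint. The identifiers are real (P31 is Wikidata's "instance of" property, Q43229 its item for "organization"), but the request is only constructed here, not actually sent.

```python
from urllib.parse import urlencode

# A sample SPARQL query for Wikidata's public endpoint
# (https://query.wikidata.org/sparql): ten items that are instances of
# "organization". P31 ("instance of") and Q43229 ("organization") are
# real Wikidata identifiers. The request URL is built but not sent.
query = """SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q43229 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10"""

url = "https://query.wikidata.org/sparql?" + urlencode(
    {"query": query, "format": "json"}
)
print(url[:34])
```

Pasting the query itself into the interactive editor at query.wikidata.org is the friendlier way to experiment; the URL construction above is just what such a tool does behind the scenes.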

I just started linking Wikidata to this site by using the Wikidata References plugin for WordPress. Now it’s easy for me to link tags to Wikidata. It mainly involves looking up the Wikidata reference for a tag (which one can do via the plugin) and adding a description. If a Wikidata entry is incomplete, wrong or simply missing, I can fix or add it in Wikidata.

The result for the reader looks like this, for instance:

What you see are the ‘tag archives’ (one needs to click a tag to see them), here for the tag ‘The Long Now Foundation’, which shows the number of the corresponding Wikidata entry and a short description. Wikidata of course provides more information and links.

This takes more time while creating a blog post (as I noticed writing this one), but it actually helps the learning process. Part of the open learning & publishing workflow I try to develop is making a lexicon, and now I realize one can do so while contributing to and using one of the most interesting knowledge bases out there, Wikidata. I added this to the workflow I posted on GitHub.

Professor Ruben Verborgh (Semantic Web, Ghent University, Belgium, research affiliate at the Decentralized Information Group at MIT, US) is convinced web apps are deeply broken: 

they compete mainly on who harvests the most data, not on who provides the best experience or innovation. This has consequences for developers, who are unwillingly dragged into this rat race, and end users, whose privacy and potential is limited.

 

Verborgh promotes the Solid ecosystem, the latest project by web inventor Tim Berners-Lee, advocating a different way of building apps that is better for everyone. He will present Solid at the FOSDEM conference in Brussels, Belgium (2 & 3 February), and the organization has already published an interview with Verborgh.

On his own blog he recently published a long post Re-decentralizing the web, for good this time.

In that post he “explains the history of decentralization in a Web context, and details Tim Berners-Lee’s role in the continued battle for a free and open Web. The challenges and solutions are not purely technical in nature, but rather fit into a larger socio-economic puzzle, to which all of us are invited to contribute.”

Today I became a member of The Long Now Foundation. Here is what they say about themselves:

The Long Now Foundation was established in 01996 to develop the [Clock] and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common. We hope to foster responsibility in the framework of the next 10,000 years.

I discovered them because Bruce Sterling, one of my favorite thinkers and writers, gave a talk at a Long Now event. People involved with The Long Now are Kevin Kelly, Brian Eno, Stewart Brand… They organize inspiring talks and do great stuff like building a mechanical clock which should last 10,000 years. I think it’s a good thing to support them. They help people to think and to debate in a larger context, in a slower and more thoughtful way.

On February 2 and 3 thousands of developers will gather in Brussels, Belgium for FOSDEM, a conference promoting free and open source software. Things interesting me:

  • Collaborative information and content management application 
  • Decentralized Internet & Privacy 
  • Open document editors
  • Open media (video, images, audio)
  • Tool the Docs (writing, managing, rendering documentation)
  • Blockchain (of course)
  • Community
  • Python and JavaScript

There will be keynotes about freedom and ethics, a bookshop etc. I’ll have to make hard choices about what to attend, but anyway, I’ll report about the event on this blog. 

The mood seems to be a bit dark. Let me quote the decentralized internet page:

PCs are less and less used while smartphones are soaring and data is collected and stored on servers on which we have very limited control as users. What happened to user’s freedom and privacy in the meantime? The outlook is not so great: we have less and less control over our digital environment. Network neutrality is heavily attacked and mainstream software products are usually proprietary or run on servers we don’t have control over. Modern technology has given the powerful new abilities to eavesdrop and collect data on people – with critical social and political consequences.

 

Happy New Year everyone! My plans in bullet points:

* Continue blogging on this site, following IndieWeb-formats.
* Learn how to use GitHub and version control in general, including using the command line interface.
* Stay involved with the IndieWeb-community.
* Prepare longer posts and articles using software developer procedures, using GitHub.
* Investigate how decentralized publishing could be applied (Beaker Browser, IPFS, Solid… )
* Integrate this in a general workflow for blogging and learning, based on teachings by Howard Rheingold and Stephen Downes.

More to come… In the meantime, have a look at an emerging article on GitHub; don’t hesitate to fork it and suggest changes and additions.

For the last week of E-Learning 3.0 (#el30) we had a conversation with Silvia Baldiri, who works with Fundación Universitaria Tecnológico Comfenalco (Colombia) and Universidad Internacional de La Rioja (Spain), and Jutta Treviranus, director of the Inclusive Design Research Centre (IDRC) and professor at OCAD University in Toronto.

The conversation was about diversity and projects for young people with very different learning contexts and needs, such as the Social Justice Repair Kit. 

Data capture is common in education, but it often fails to capture the complexity and the differences between the situations of different youth groups. A qualitative approach is needed, for instance one allowing for storytelling. This is necessary because data show us what happened in the past, and there is a risk that, when decisions are based solely on these historical data, injustices from the past will be perpetuated.

We also discussed the three dimensions of “inclusive design”: take into account the fact that everybody is unique (“one size fits one”), use inclusive, open and transparent processes, and realize that you are designing in a complex adaptive system. One of the benefits of involving the most vulnerable groups is that these people are like the canaries in the coal mine: what happens to them is often a precursor to what happens to society as a whole.

In case you wonder what all this has to do with decentralized tools, let me conclude with a long quote from the course:


McLuhan said that technology is a projection of ourselves into the community, so we need to consider how human capacities are advanced and amplified in a distributed and interconnected learning environment. Our senses are amplified by virtual and augmented reality, our cognitive capacities extended by machine vision and artificial intelligence, and our economic and social agency is represented by our bots and agents.

We are the content – the content is us. This includes all aspects of us. How do we ensure that what we project to the world is what we want to project, both as teachers and learners? As content and media become more sophisticated and more autonomous, how do we bind these to our personal cultural and ethical frameworks we want to preserve and protect?

These are tied to four key elements of the new technological framework: security, identity, voice and opportunity. What we learn, and what makes learning successful, depends on why we learn. These in turn are determined by these four elements, and these four elements are in turn the elements that consensus-based decentralized communities are designed to augment.

Learning therefore demands more than just the transmission or creation of knowledge – it requires the development of a capacity to define and instantiate each of these four elements for ourselves. Our tools for learning will need to emphasize and promote individual agency as much as they need to develop the tools and capacities needed to support social, political and economic development.