PressED is a Twitter conference (#pressedconf18/#pressedconf19) looking into how WordPress is used in teaching, pedagogy and research. #pressEDconf19 is happening on April 18th.

This is what it looked like last year. Using video and links to blog posts, lots of content can be discussed even on a short-form medium such as Twitter.

The #el30 course about learning and the decentralized web may be finished for now (there will be new editions), but I continue my exploration of all things decentralized web.

One of the projects out there to build a decentralized web is Solid, led by Tim Berners-Lee, the inventor of the World Wide Web. Solid would enable us to store all our personal data ourselves; the question then becomes how we link that data with apps and with other people’s data (managed by us, the owners of our own data).

Luckily, through URLs and the Resource Description Framework (RDF), it is possible to link data. It’s like hyperlinks on the classical web, but even more radical, since we link individual pieces of data. This enables automated software to store, exchange, and use machine-readable information distributed throughout the Web, in turn enabling users to deal with that information with greater efficiency and certainty.
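To make this concrete, here is a minimal sketch in Python using the rdflib library (my own illustration, not something from the course; the blog post URL is a placeholder):

```python
# A minimal linked-data sketch using rdflib (pip install rdflib).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

g = Graph()
SCHEMA = Namespace("http://schema.org/")

# Hypothetical blog post URL; the Wikidata URL is a real, stable identifier
# (Q42 is the well-known example entity for Douglas Adams).
post = URIRef("https://example.org/blog/my-post")
topic = URIRef("http://www.wikidata.org/entity/Q42")

# Every statement is a triple: subject, predicate, object.
g.add((post, SCHEMA.about, topic))
g.add((post, RDFS.label, Literal("A post about Douglas Adams")))

# Serialize as Turtle: plain text any RDF-aware tool can parse and reuse.
print(g.serialize(format="turtle"))
```

Because both the post and the topic are named with URLs, any other site or program can point at exactly the same pieces of data.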

But how do we start using linked data? I asked professor Ruben Verborgh (Ghent, Belgium), and he suggested having a look at Wikidata. Wikidata is a collaboratively edited knowledge base hosted by the Wikimedia Foundation. It’s a linked open data hub and a great place to start learning about data while contributing to the project.
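To give an idea of what “linked open data hub” means in practice, here is a small sketch (mine, not from the course) that sends the canonical example query from query.wikidata.org to the public SPARQL endpoint using Python’s requests library:

```python
# Query the public Wikidata SPARQL endpoint (https://query.wikidata.org).
import requests

# The canonical example query: items whose "instance of" (P31) is
# house cat (Q146), with English labels.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "linked-data-demo/0.1"},  # be polite to Wikimedia
)
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], "-", row["itemLabel"]["value"])
```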

I just started linking Wikidata to this site using the plugin Wikidata References for WordPress. Now it’s easy for me to link tags to Wikidata. It mainly involves looking up the Wikidata reference for a tag (which one can do via the plugin) and adding a description. If the Wikidata entry is incomplete, wrong or simply missing, I can correct or add it in Wikidata.
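The plugin handles the lookup for you, but the underlying idea can be sketched with Wikidata’s public search API (my own sketch, not the plugin’s actual code; the function name is made up):

```python
# Look up the Wikidata item (Q-number) and description for a blog tag,
# roughly the kind of lookup the Wikidata References plugin performs.
import requests

def wikidata_lookup(tag):
    """Return (Q-id, description) for the best Wikidata match of a tag."""
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": tag,
            "language": "en",
            "format": "json",
        },
        headers={"User-Agent": "tag-lookup-demo/0.1"},
    )
    matches = response.json().get("search", [])
    if not matches:
        return None  # entry missing: a chance to add it to Wikidata
    return matches[0]["id"], matches[0].get("description", "")

print(wikidata_lookup("Long Now Foundation"))
```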

Here, for instance, is what the reader sees:

What you see is a ‘tag archive’ (one needs to click a tag to see it), here for the tag ‘The Long Now Foundation’, showing the number of the corresponding Wikidata entry and a short description. Wikidata of course provides more information and links.

This takes more time while creating a blog post (as I noticed writing this one), but it actually helps the learning process. Part of the open learning & publishing workflow I try to develop is making a lexicon, and now I realize that one can do so while contributing to and using one of the most interesting knowledge bases out there, Wikidata. I added this step to the workflow I posted on GitHub.

Professor Ruben Verborgh (Semantic Web, Ghent University, Belgium, research affiliate at the Decentralized Information Group at MIT, US) is convinced web apps are deeply broken: 

they compete mainly on who harvests the most data, not on who provides the best experience or innovation. This has consequences for developers, who are unwillingly dragged into this rat race, and end users, whose privacy and potential is limited.


Verborgh promotes the Solid ecosystem, the latest project by web inventor Tim Berners-Lee, which advocates a different way of building apps that is better for everyone. He will present Solid at the FOSDEM conference in Brussels, Belgium (2 & 3 February), and the organization already published an interview with Verborgh.

On his own blog he recently published a long post, “Re-decentralizing the web, for good this time”.

In that post he “explains the history of decentralization in a Web context, and details Tim Berners-Lee’s role in the continued battle for a free and open Web. The challenges and solutions are not purely technical in nature, but rather fit into a larger socio-economic puzzle, to which all of us are invited to contribute.”

For the last week of E-learning 3.0 (#el30) we had a conversation with Silvia Baldiris, who works with the Fundación Universitaria Tecnológico Comfenalco (Colombia) and the Universidad Internacional de la Rioja (Spain), and Jutta Treviranus, director of the Inclusive Design Research Centre (IDRC) and professor at OCAD University in Toronto.

The conversation was about diversity and projects for young people with very different learning contexts and needs, such as the Social Justice Repair Kit. 

Education commonly relies on data capture, but this often fails to reflect the complexity and the differences of the situations of different youth groups. A qualitative approach is needed, for instance one allowing for storytelling. This is necessary because data show us what happened in the past, and there is a risk that, when decisions are based solely on such historical data, injustices from the past will be perpetuated.

We also discussed the three dimensions of “inclusive design”: take into account the fact that everybody is unique (“one size fits one”); use inclusive, open and transparent processes; and realize that you are designing in a complex adaptive system. One of the benefits of involving the most vulnerable groups is that these people are like the canaries in the coal mine: what happens to them is often a precursor to what happens to society as a whole.

In case you wonder what all this has to do with decentralized tools, let me conclude with a long quote from the course:


McLuhan said that technology is a projection of ourselves into the community, so we need to consider how human capacities are advanced and amplified in a distributed and interconnected learning environment. Our senses are amplified by virtual and augmented reality, our cognitive capacities extended by machine vision and artificial intelligence, and our economic and social agency is represented by our bots and agents.

We are the content – the content is us. This includes all aspects of us. How do we ensure that what we project to the world is what we want to project, both as teachers and learners? As content and media become more sophisticated and more autonomous, how do we bind these to our personal cultural and ethical frameworks we want to preserve and protect?

These are tied to four key elements of the new technological framework: security, identity, voice and opportunity. What we learn, and what makes learning successful, depends on why we learn. These in turn are determined by these four elements, and these four elements are in turn the elements that consensus-based decentralized communities are designed to augment.

Learning therefore demands more than just the transmission or creation of knowledge – it requires the development of a capacity to define and instantiate each of these four elements for ourselves. Our tools for learning will need to emphasize and promote individual agency as much as they need to develop the tools and capacities needed to support social, political and economic development.

In our effort to form a community with the participants of E-learning 3.0 (#el30), we wrote blog posts reflecting on our learning experiences. Kevin Hodgson made a visualization of the posts using the tool ThingLink.

Our esteemed course organizer, Stephen Downes, invited us to a video hangout. For some weird reason I was the only one to actually enter the hangout, but some others joined in via text chat. This is the video:

Stephen Downes questioned whether asking participants to post about their learning experience in this course was a good strategy to establish “community”. Why not just suggest posting a hashtag such as #el30community? By asking people to post about their experiences, participants who for whatever reason would not do so could end up feeling alienated. It reminded me of discussions we had in other communities about lurkers: are they part of the community? The consensus was that they were; lurking can be valuable. Still, I’m glad I suggested writing a post: it generated new ideas and interactions.

But how could the blockchain help to establish consensus in a trustless environment? There are theories and experiments involving Decentralized Autonomous Organizations (DAOs); I hope to find out more about that in the coming days and weeks.

We also briefly mentioned the possibility of having a Community of Practice (CoP) on a more permanent basis. There is so much to explore: how to use various distributed technologies, how to use Docker and Jupyter Notebooks, which methodologies and pedagogies are best for various peer-to-peer learning contexts. One participant asked whether a central hub would be useful for such a CoP – in my opinion some hybrid model of a hub and a distributed environment would be interesting.

This is a nice example of using virtualization to enhance learning: Repl.it Multiplayer. The site enables you to “code with friends in the same editor, execute programs in the same interpreter, interact with the same terminal, chat in the IDE, edit files and share the same system resources, and ship applications from the same interface.”

You can also find programming courses on the platform. It illustrates the virtues of virtualization: “You can create a workspace in any number of languages, where you are given a container on a virtual machine where your code can run, sandboxed.”

Why would I use it? I have neither the time nor the inclination to become a programmer. However, I’m very interested in cyber culture, and programming is part of that. Read some cyberpunk stories and chances are you’ll encounter coders. These coders tend to be close to the machine; they are more into the C language than into high-level languages such as Python. That alone makes me want to learn some basic C. I could use Multiplayer for that (even though you can also experiment with Python on the platform).

I love to repurpose MOOCs such as C Programming: getting started on the edX platform. Not because I will use it for my day job, but because it brings me a bit closer to cyberpunk literature. The same applies to the course Bitcoin and Cryptocurrencies on that same platform: I’m not really interested in learning how to trade crypto; I’m attracted by the fact that the course will also explore topics such as the Cypherpunk Movement.

So what I would like to do is find people interested in internet culture, cyber culture and digital humanities, and repurpose existing learning materials into a cyber culture course of our own. We could use platforms such as Multiplayer to play with code, and maybe even try to build data-driven art using virtualization technology ourselves (there is a beautiful handbook, Teaching and Learning with Jupyter, on GitHub).

It would be a connectivist MOOC for people interested in useless stuff such as philosophy and art, different from the endless offerings of business and job-oriented courses on the mainstream online platforms.

Hat tip to Stephen Downes, organizer of the MOOC E-learning 3.0, who discussed Repl.it and the Jupyter handbook in his newsletter OLDaily.

How do I feel about the course E-learning 3.0 (#el30)? Why did I participate to begin with? First of all, I liked the idea of participating in a project facilitated by Stephen Downes, since I appreciate his newsletter and his pioneering work in developing and facilitating Massive Open Online Courses (MOOCs). I’m also intrigued by what comes next in communication and collaboration. Yet I have many questions and doubts – which in itself is a positive outcome of the course.

I wrote this post as part of a communal effort by #el30 students to express themselves about their learning. In a follow-up post I hope to react to what my co-learners posted.

Decentralized

“The first phase of the internet was based on the client-server model, and focused on pages and files. The second phase, popularly called Web 2.0, created a web based on data and interoperability between platforms”, as Stephen explained. A very important theme of the course is a twofold shift: in our understanding of content, from documents to data; and in our understanding of data, from centralized to decentralized. It’s about emancipating yourself from the big internet companies that turn your data into a product they own.

I had great fun starting this new blog on Reclaim Hosting and doing so IndieWeb-style, enabling easy interaction with other blogs. I consider it a first, modest step in emancipating myself from the data collectors and traders.
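That easy interaction rests on the W3C Webmention protocol, which the IndieWeb WordPress plugins implement: one site notifies another that it has linked to it. Here is a bare-bones sketch in Python (all URLs are placeholders; a real client first discovers the endpoint from a rel="webmention" link on the target page):

```python
# Send a Webmention notification (W3C Webmention protocol).
# A real client first discovers the target's endpoint from a
# <link rel="webmention"> header or tag; these URLs are placeholders.
import requests

source = "https://my-blog.example/reply-to-your-post"  # my page linking out
target = "https://your-blog.example/original-post"     # the page I linked to
endpoint = "https://your-blog.example/webmention"      # discovered endpoint

response = requests.post(endpoint, data={"source": source, "target": target})
print(response.status_code)  # 201 or 202 means accepted for verification
```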

As a second step, I experimented with the Beaker Browser and the InterPlanetary File System (IPFS). I like doing that, but I’m not yet convinced these projects (and others such as Solid by Tim Berners-Lee, Blockstack or Holochain) will actually get a mainstream following. It’s still very early days, and the proposed solutions require a considerable investment of time and effort from users.
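For a taste of what that experimenting looks like, here is a sketch that adds a file to a locally running IPFS daemon over its HTTP API (assuming the daemon’s default address; my own illustration, not something from the course):

```python
# Add a file to a locally running IPFS daemon via its HTTP API.
# Assumes `ipfs daemon` is running on the default address 127.0.0.1:5001.
import requests

files = {"file": ("hello.txt", b"Hello, distributed web!")}
response = requests.post("http://127.0.0.1:5001/api/v0/add", files=files)

info = response.json()
print(info["Hash"])  # the content address (CID); anyone can fetch it by hash
```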

I still have to experiment with other technologies we discussed, such as Docker and Jupyter Notebooks. However, my interest in virtualization and software containers is not driven by any real need – which might explain why I have not yet tried them out. For now I’m perfectly happy with Reclaim Hosting, WordPress and IndieWeb plugins.

Will decentralization and virtualization change the way we learn? I’m not sure. A network of blogs such as we have for this course surely helps me to get new perspectives, and it’s very motivating. But do we need to have such a network on IPFS, do we have to use Dat archives (the Beaker project), or do we have to collaborate using Blockstack apps? As far as the immediate learning experience is concerned, I doubt it would feel very different.

Data and assessment

It could be different in the future, when we collect far more data about our learning. It would feel more comfortable to manage those data ourselves rather than counting on big internet companies or other commercial entities to do it for us. If the hosting of such data were no longer an issue, developers could compete again on the functionality of the apps they offer. They would also have to compete on the degree of trust and privacy they provide.

However, why should I, an adult learner (and getting old), collect and analyze “my data”? Suppose I were studying Japanese. I could collect data about the number of hours I spend learning, the exercises I do, and the progress I make in terms of courses finished. But what really interests me is whether I can hold a simple conversation in Japanese or read a newspaper article (for now, I can’t). To find that out, I just have to engage in a conversation and read a newspaper; I don’t need fancy data collection and management. Nor do I care about proving my skill to others: if an employer recruited me for my Japanese skills, it would very soon be obvious how limited these are, while other skills (say, reading and understanding Spanish) would prove more satisfying and useful.

The same applies to skills such as software programming. For decades now, people have been recruited because they are coding wizards, sometimes self-taught ones. No blockchain-protected data were necessary to prove their skills. For pilots and the medical professions, the current testing methods seem to guarantee (most of the time) a steady supply of people you can trust (most of the time).

So do we really need blockchain, Dat documents or IPFS, or are these technologies solutions in search of a problem? I lack the knowledge and visionary talents of Stephen Downes, but as yet I’m not convinced these decentralization projects will actually conquer the world. That will not stop me from trying out whatever they build. I also look forward to learning more about Solid, the Tim Berners-Lee project, since it builds on existing web technologies to create an environment providing sophisticated personal data management and (I think) a read/write web.

Synchronous and asynchronous

I enjoy the course and the interactions with other participants, but I’m a bit surprised by the lack of synchronous activities. The weekly video interviews featuring Stephen and one or two guests don’t seem to lead to synchronous group interaction. The classical problem with such interactions is the difficulty of finding a time slot convenient for a group of people living in very different time zones. Another issue is the video conferencing software: does it enable people to virtually meet, share screens, and work collaboratively on a document (such as a mind map)? Fifty years after Douglas Engelbart’s Mother of All Demos, these affordances are not self-evident. I think such a synchronous collaborative environment would be an important tool for online learning.

So how do we become one community? That’s this week’s task for the course E-learning 3.0. Do we even want to be one community? Do we want to celebrate our similarities or our differences? Do we need to celebrate anything at all?

I think the best way to “solve” this task is to find a viable minimal consensus. We self-organize, establishing for the occasion something like a community, but without anything that requires a tremendous investment of trust and long-term commitment.

Let’s follow the example of Wikipedia. Pete Forsyth explained how there is no need for Wikipedia community members to trust each other on some deep, all-encompassing personal level. It’s enough to trust a member to do a good job by providing information backed up by references to good sources.

So what could we do to affirm ourselves as members of a loose #el30 community – one which could eventually develop into a community of practice?

Concrete proposal

I suggest we all post about our experiences in this course. It could be a short or long piece about the content, the way the course is organized, the way the learners did or did not interact with each other, or how we reacted in blog posts and on social media.

Such a post seems like a natural thing to do; there are no good or bad posts. Yet it would affirm our being together in this thing, #el30.

In our course E-learning 3.0 (#el30), Stephen Downes interviewed Pete Forsyth, Wikipedia editor and editor-in-chief of The Signpost, a community newspaper covering Wikipedia and the Wikimedia movement. Forsyth also runs a blog about all things Wikipedia and wiki-based knowledge production. This was particularly interesting to me, since I use Wikipedia a lot and I like quoting it. The broader question was how Wikipedia avoids fake-news controversies and how it arrives at consensus.

Since I’m used to quoting Wikipedia, I was a bit shocked when Pete told us that people should not cite Wikipedia as such, but rather the sources Wikipedia mentions to back up claims. There is no such thing as “Wikipedia”; there are people contributing articles or parts of articles, hopefully following the Wikipedia policies. There are guidelines about what counts as a good source, similar to how journalists judge sources and their claims.

Still, I think it sometimes does make sense to quote Wikipedia, since it’s not just a totally decentralized platform where anything goes: there are policies, a Wikipedia culture and standard practices. Especially for definitions, typically found at the start of articles, it can make sense to refer simply to “Wikipedia”, as often no references are available there.

Of course it’s important to check the history of a Wikipedia entry and to have a look at the discussion page. The entry itself is one element to judge; its history, the discussion, and the quality and number of sources are all important as well.