When we design for online learning, do we care enough about the impact of the algorithms underneath?
Take group allocation as one example.
When I started out in learning design, I thought of random group allocation as a refreshingly easy solution. I would show how simply pressing a few buttons in a learning management system could sort classes of hundreds of names into small groups. I would advise having a plan for the inevitable “I want to swap my group” request, but otherwise it’s a fully automated time saver.
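Under the hood, that button press usually amounts to little more than a shuffle and a chunk. A minimal sketch of what such an allocation might do (the function name, group size, and seed are illustrative, not any particular LMS's code):

```python
import random

def allocate_groups(names, group_size, seed=None):
    """Randomly shuffle names, then slice them into groups of group_size.

    The last group may be smaller if the class doesn't divide evenly --
    the algorithm has no notion of who ends up together, or alone.
    """
    rng = random.Random(seed)
    shuffled = names[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]

# A class of ten names split into groups of three: four groups,
# the last containing a single student.
groups = allocate_groups([f"student{n}" for n in range(10)], 3, seed=42)
```

Note what the sketch does not consider: personalities, prior conflicts, time zones, or anyone stranded in an undersized group. Everything the rest of this piece worries about happens outside the algorithm.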
There is a simplicity to hustling individuals into small groups, into private online group spaces and out-of-class meet-ups. A gentle stroll into social learning and group work.
What if instead of names on a screen, you were grabbing those real people by the arm and pushing them together into small rooms?
Once they were away, they were invisible.
Would you perhaps look closer at the groups before they were put together?
Check in on them in person?
Be concerned about the dynamics?
Question, and possibly intervene, if you witnessed power and control problems?
Feel their awkwardness keenly?
Does relying only on that random allocation put a student at risk?
What protections and escapes are there?
What are our responsibilities?
Can you begin to imagine what can go wrong if we don’t care?